
Career Positioning for Frontend Engineers in the AI Era

How Can a 25-Year-Old Ride the Wave Without Drowning? — An Expert Roundtable



"The bottleneck is never the code. The bottleneck is always the understanding." — William Yeh



Roundtable Participants:

  • William Yeh (葉大師) | DevOps/Infra senior consultant, Theory of Constraints (TOC) advocate, argues the bottleneck has migrated from "writing code" to "defining specs and governance"
  • Ya-Ching Chang (張雅晴) | HR Director in Taiwan's tech industry, 15 years of IT recruiting experience, observes companies already cutting junior frontend positions
  • Kent Beck | Creator of Extreme Programming, TDD pioneer, argues the AI era demands even stronger design and testing discipline
  • Hong-Zhi Lin (林宏志) | 10-year senior frontend engineer, survived the jQuery → React → Next.js transitions, believes frontend complexity is vastly underestimated
  • Sarah Chen | Silicon Valley Staff Engineer, former Meta frontend architect, views AI as a tool — not a replacement
  • Dr. Ming-Zhe Chen (陳明哲) | Professor at NTU CSIE, AI and software engineering researcher, focused on the real boundaries of AI capabilities

Moderator: We have a specific scenario today. A 25-year-old frontend engineer in Taiwan — title says Senior, self-assessment says mid-level — wants to know: how do I survive the AI wave? How do I ride it instead of getting crushed? Each of you has 30 seconds for your opening position. William, you go first.



Opening: Each Expert's Core Position

William Yeh: The bottleneck has moved. Period. The Theory of Constraints tells us: when one bottleneck is relieved, another emerges. AI relieved the "writing code" bottleneck. Now the bottleneck is spec definition, verification, and governance. If you're still optimizing your ability to write code faster, you're optimizing a non-bottleneck. That's the definition of waste. This 25-year-old needs to pivot — not in five years, NOW.

Ya-Ching Chang: I'll give you the numbers. In 2024, my company posted 12 junior frontend openings. In 2025, we posted 4. For 2026, we've posted exactly 1 — and it's a combo role with backend responsibilities. The market is speaking. Companies are not waiting for a consensus on whether AI replaces frontend engineers — they're already acting on the assumption that it will.

Kent Beck: I've seen this pattern before. When IDEs got autocomplete, people said programmers would be obsolete. When Rails launched, people said web developers would be obsolete. They weren't. But the ones who survived were the ones who understood design — not just typing. AI is the most powerful autocomplete ever built. The engineers who thrive will be those who know what to ask for, how to verify it, and how to design systems that remain maintainable. TDD isn't dead — it's more important than ever.

Hong-Zhi Lin: *leans forward* Everyone's talking about AI replacing frontend work like it's just "make a button blue." I've spent 10 years debugging cross-browser rendering, accessibility edge cases, complex state management with race conditions, and UX micro-interactions that no designer specced properly. Show me an AI that can handle a11y compliance across 47 screen reader combinations. I'll wait.

Sarah Chen: At Meta, we used AI tools extensively. They're incredible for boilerplate — I'd estimate 40-60% productivity gains on routine tasks. But every Staff+ engineer I know spends maybe 20% of their time writing code. The rest is architecture decisions, cross-team negotiation, debugging production incidents at 3 AM with incomplete information, and mentoring. AI doesn't do any of that. This 25-year-old shouldn't panic — but they should absolutely start building the skills that AI can't replicate.

Dr. Ming-Zhe Chen: Let me inject some empirical rigor. LLMs are semantically non-deterministic — same prompt, different outputs. This is not a bug; it's a fundamental property. Our lab tested GPT-4, Claude 3.5, and Gemini Ultra on 500 real-world frontend tasks. For well-specified, isolated component tasks, accuracy was 78-85%. For tasks requiring cross-component coordination, it dropped to 31-40%. For tasks involving ambiguous requirements — the kind product managers actually give you — it was 12-18%. The gap between "AI can write a React component" and "AI can build a production frontend" is enormous.

Ya-Ching Chang: Before we move on — I want to challenge Dr. Chen's numbers directly. You said 12-18% accuracy on ambiguous requirements. But companies don't care about accuracy on ambiguous requirements in isolation. They care about total cost of development. If AI can do the easy 60% of tasks, and companies can hire one senior engineer to handle the hard 40% — instead of three mid-level engineers to handle all of it — that's still a workforce reduction. The math doesn't need AI to be perfect. It needs AI to be good enough to change the economics.

Dr. Ming-Zhe Chen: Ya-Ching, that's a valid economic argument, but it assumes companies can cleanly separate the "easy 60%" from the "hard 40%." In practice, they're interleaved. A feature that looks like a closed problem often turns into an open problem when you hit edge cases. If you don't have engineers who have practiced on the easy problems, they won't have the pattern recognition needed for the hard ones. It's the Dreyfus model again — you can't skip stages.

William Yeh: *decisively* Both of you are right, and that's precisely why the bottleneck is migrating. The old model: hire 5 mid-level engineers, some do easy work, some do hard work, everyone develops gradually. The new model: AI handles routine work, you need 2 engineers who can handle the hard work from day one. The development pipeline for engineers has been compressed. That's not a prediction — it's already happening.

Hong-Zhi Lin: *sarcastically* Great, so we've invented a world where junior engineers can't get jobs to develop the experience that would make them senior. What could go wrong?

Kent Beck: *laughing* That's been the paradox of every productivity tool in history. And every time, the answer has been the same: the engineers who invest in themselves during the transition thrive. The ones who wait for the market to sort itself out don't. Our job tonight is to make sure this 25-year-old is in the first group.

Hong-Zhi Lin: Fine. But let's make sure our advice is actually actionable for someone earning NT$80K/month in Taipei, not just inspirational for a Silicon Valley audience. Promise me we'll get specific.

Sarah Chen: Agreed. No hand-waving.

Dr. Ming-Zhe Chen: Data over opinions. That's my commitment for tonight.

Moderator: Battle lines drawn. Let's get into it.



Round 1: The Real Threat Level of AI to Frontend Engineers

Moderator: The central question — will AI replace junior and mid-level frontend engineers within 2-3 years? William, you clearly think yes.

William Yeh: Not "replace" in the sci-fi sense. "Displace" is more accurate. Here's the TOC framework. In traditional software development, the bottleneck was writing code. Developers were expensive, slow, and in short supply. Companies hired junior engineers to alleviate this bottleneck. Now AI alleviates it far more cheaply. So the demand for humans at the coding bottleneck drops. But new bottlenecks emerge: Who writes the spec that the AI follows? Who verifies the AI's output is correct? Who decides what permissions the AI agent has? Who bears the consequences when it breaks? These are what I call the four accountability questions. Junior engineers who can't answer these questions are solving a problem that no longer needs as many humans.

Hong-Zhi Lin: William, I respect the TOC framework, but you're applying it to an oversimplified model of frontend work. Let me describe what my Monday looked like last week. I spent 3 hours debugging a CSS Grid layout that broke specifically on Safari 16.2 when the user had Dynamic Type set to XXL and the device was in landscape mode. Then I spent 2 hours working with our a11y specialist to ensure our custom dropdown component properly handled focus traps and announced state changes for VoiceOver, TalkBack, and NVDA — each of which behaves differently. Then I had a 90-minute meeting with the PM about a requirement that was literally "make the onboarding flow feel more premium." Define that as a spec for an AI.
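[Editor's aside: the focus-trap work Hong-Zhi describes is mostly judgment, but its mechanical core can be sketched. This is a minimal, hypothetical illustration — `nextFocusIndex` and the selector list are illustrative names, not his team's code — of the wrap-around arithmetic a dialog focus trap needs so Tab from the last focusable element returns to the first, and Shift+Tab from the first wraps to the last.]

```typescript
// Hypothetical sketch: the index arithmetic at the core of a dialog focus trap.
// Tab moves forward (+1), Shift+Tab moves backward (-1), and both wrap.
function nextFocusIndex(current: number, direction: 1 | -1, count: number): number {
  if (count === 0) return -1; // nothing focusable: caller should no-op
  return (current + direction + count) % count; // wrap in both directions
}

// Browser wiring, shown for shape only (requires a DOM, so kept as a comment):
// dialog.addEventListener("keydown", (e) => {
//   if (e.key !== "Tab") return;
//   e.preventDefault();
//   const focusables = [...dialog.querySelectorAll<HTMLElement>(
//     "button, [href], input, select, textarea, [tabindex]:not([tabindex='-1'])")];
//   const i = focusables.indexOf(document.activeElement as HTMLElement);
//   focusables[nextFocusIndex(i, e.shiftKey ? -1 : 1, focusables.length)]?.focus();
// });
```

The arithmetic is the easy part; the judgment Hong-Zhi describes — which elements count as focusable per screen reader, what gets announced on wrap — is where the three assistive technologies diverge.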

William Yeh: You just proved my point. Everything you described is an open problem — ambiguous requirements, stakeholder negotiation, edge cases that require human judgment. That's exactly where the bottleneck is moving! I'm not saying frontend engineers disappear. I'm saying the ones who survive are the ones doing what you described — not the ones writing straightforward CRUD forms. And there are a lot more engineers writing CRUD forms than debugging Safari a11y edge cases.

Hong-Zhi Lin: Fair, but you're assuming companies understand the difference. Most Taiwanese tech companies I've worked with think all frontend work is CRUD forms. They'll cut the "a11y specialist" first because they don't understand the value.

Ya-Ching Chang: Hong-Zhi, I hate to confirm your fear, but you're right. I surveyed 38 hiring managers in Taiwan's top-50 tech companies last quarter. When I asked "which engineering roles do you plan to reduce headcount in over the next 18 months," 71% said "junior frontend." Only 12% said "senior frontend." The problem is that their definition of "junior" vs "senior" is based on years of experience, not on the open-vs-closed problem framework William is describing.

Kent Beck: This is a design literacy problem. Let me reframe it. The value of a frontend engineer was never "I can write JSX." The value was "I can translate ambiguous human needs into working, maintainable interfaces." AI can now do the JSX part — sometimes. It cannot do the translation part. But here's the catch — and this is what worries me — many engineers never learned the translation part. They went straight from bootcamp to writing components. They can implement a Figma spec pixel-perfectly, but they've never questioned whether the spec itself makes sense. Those engineers are genuinely at risk.

Sarah Chen: *nods* At Meta, we observed something specific. When we rolled out internal AI coding assistants in 2024, junior engineers' measurable output — PRs merged, features shipped — went up 35% on average. Sounds great, right? But six months later, we noticed production incidents involving those engineers' code went up 28%. Why? The AI helped them write code faster, but it didn't help them think about edge cases, error handling, or system interactions. They were shipping more bugs faster. We ended up increasing code review requirements for AI-assisted PRs, not decreasing them.

Dr. Ming-Zhe Chen: This aligns with our research. We call it the "velocity-quality divergence." AI tools increase code output velocity but do not proportionally increase code quality. In fact, for engineers with less than 3 years of experience, we found a statistically significant negative correlation between AI tool usage intensity and code quality metrics — measured by defect density, test coverage of edge cases, and production incident rate. The more they relied on AI, the worse their code quality became, because they were skipping the learning process that traditionally built engineering judgment.

William Yeh: And this is exactly why I say the bottleneck has moved. The verification layer — making sure the AI's output is correct, secure, performant, and accessible — that IS the new bottleneck. And it requires more skill than writing the code in the first place. You need to understand what correct means in context. That's a fundamentally human, open-problem skill.

Hong-Zhi Lin: I agree with the theory, William, but let me push back on the timeline. You said "NOW." I think we have more time than you suggest. Let me tell you what happened when our team tried to use Cursor + Claude to build a complex form with conditional validation, dynamic field dependencies, real-time preview, and accessibility compliance. After 6 hours of prompting, debugging AI output, and manual corrections, we estimated we'd have been faster doing it from scratch. The AI kept introducing subtle regressions — fix one field's validation, break another's dependency chain. LLMs don't hold state well enough for complex, interconnected frontend logic.
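[Editor's aside: the "fix one field, break another" failure mode Hong-Zhi describes comes from fields whose validity depends on other fields. A hedged sketch of that structure — all names here are hypothetical, not his team's code — models the form as a dependency map and computes which fields must be revalidated, in cascade order, after one field changes. Miss one of these edges and you get exactly the regressions he saw.]

```typescript
// Hypothetical sketch: dependent-field revalidation as an explicit graph walk.
type FieldDeps = Record<string, string[]>; // field -> fields it reads from

// Return the fields affected when `changed` changes, in cascade order
// (a depth-first walk from the changed field through its dependents).
function revalidationOrder(changed: string, deps: FieldDeps): string[] {
  // Invert the map: for each field, who depends on it?
  const dependents: Record<string, string[]> = {};
  for (const [field, uses] of Object.entries(deps))
    for (const u of uses) (dependents[u] ??= []).push(field);

  const order: string[] = [];
  const seen = new Set<string>();
  const visit = (f: string): void => {
    if (seen.has(f)) return;
    seen.add(f);
    order.push(f);
    for (const d of dependents[f] ?? []) visit(d);
  };
  visit(changed);
  return order;
}
```

The point of the sketch: this interconnection is global state an LLM must hold across the whole form, which is precisely what token-sequence generation handles poorly.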

Dr. Ming-Zhe Chen: Hong-Zhi raises a critical point. Current LLMs have a fundamental architectural limitation for frontend work specifically: they process code as token sequences, not as the interconnected graph of state, side effects, and rendering dependencies that frontend code actually is. Until AI architectures can model reactive dependency graphs natively, complex frontend work will remain challenging for AI. That said — and I need to be intellectually honest here — progress is rapid. The gap between GPT-3.5 and Claude 3.5 on our frontend benchmarks was a 47% improvement in just 18 months. If that trajectory continues, the 31-40% accuracy on cross-component tasks could reach 60-70% by 2028.

Sarah Chen: Which means this 25-year-old has a window — maybe 3-5 years — to reposition. Not infinite time, but not zero either.

William Yeh: I'd say 2-3 years for the market shift, even if the technology isn't fully there. Because hiring decisions are based on perception, not reality. Ya-Ching, you're already seeing companies cut junior frontend roles. They're not doing it because AI can actually replace those engineers today — they're doing it because they believe it will, and they want to be ahead of the curve.

Ya-Ching Chang: Exactly. The market moves on narrative, not on benchmarks. And the narrative right now is overwhelmingly "AI makes junior developers redundant." Whether it's true is almost irrelevant for job market dynamics. If 71% of hiring managers believe it, the job openings dry up regardless of whether AI can actually do the work.

Kent Beck: *firmly* This is precisely why individual engineers need to take control of their own narrative. Don't wait for the market to categorize you. Categorize yourself. If you can articulate your value in terms of the open problems you solve — not the code you write — you're insulated from the narrative shift. The 25-year-old should be asking: "What problems do I solve that no prompt can express?"

Hong-Zhi Lin: *interrupts* Kent, that's easy to say when you're Kent Beck. This person is a 25-year-old in Taiwan earning maybe NT$80,000 a month. They don't have the luxury of philosophical identity shifts. They need to pay rent. They need practical advice.

Kent Beck: Practical advice IS philosophical. When I was 25, I was nobody. I didn't have a brand. I had a discipline. The discipline of asking "why" before implementing didn't cost me anything — and it compounded over 30 years. That's the most practical advice I can give: develop habits that compound.

Sarah Chen: I want to bridge the disagreement here. Both Kent and Hong-Zhi are right, and the synthesis is important. The immediate practical step is to keep doing your current job well — don't quit, don't panic. But while doing that, start shifting 10-20% of your energy toward the open-problem skills we're discussing. It's not an either/or. It's a gradual rebalancing.

Dr. Ming-Zhe Chen: Let me add one more data point. We surveyed 500 frontend engineers globally about their daily time allocation. For engineers with 1-3 years of experience, the split was approximately: 65% writing new code, 15% debugging, 10% meetings, 10% code review. For engineers with 7+ years: 20% writing new code, 25% debugging/investigation, 25% meetings and cross-team communication, 15% code review, 15% design and architecture. Notice the trajectory — the more experienced you get, the less time you spend writing code. AI is accelerating that trajectory, not inventing it. The 25-year-old should see AI as pushing them up this curve faster, not eliminating them.

William Yeh: That's a good reframe, Dr. Chen. Let me add the TOC perspective on that data. In the experienced engineer's time allocation, every activity except "writing new code" is a bottleneck activity — hard to automate, high human judgment required. AI is compressing the early career phase where writing code is dominant. Engineers who would have spent 5 years mostly writing code now need to develop those higher-order skills in 2-3 years. The learning curve is steepening, not flattening.

Ya-Ching Chang: And that creates a very specific market implication. Companies that used to have a clear junior → mid → senior pipeline are finding that the junior rung is shrinking. They can't gradually develop people anymore — they need engineers who can operate at the mid level from day one. That's why I'm seeing more "we want 2-3 years of experience minimum" even for roles that used to be entry-level. The entry bar is rising, and it's rising because AI handles the tasks that used to justify entry-level positions.

Hong-Zhi Lin: *grudgingly* Okay, I'll concede that the market perception problem is real, even if I think the technical reality is more nuanced. But I want to put something on record: the companies cutting junior frontend roles are going to regret it. In 2-3 years, they'll have a senior talent pipeline problem because they stopped developing people. They're optimizing for short-term cost reduction at the expense of long-term organizational capability. I've seen this before — it happened with offshore outsourcing in the 2000s. Companies outsourced all junior work, then couldn't find senior engineers locally because nobody had the on-ramp.

William Yeh: Hong-Zhi, that's a valid systemic concern, but it doesn't help our 25-year-old today. They can't wait for companies to realize their mistake. They need to act on the market as it is, not as it should be.

Moderator: Strong disagreement on timeline, but some convergence on direction. Let's vote.

Vote: What percentage of junior/mid frontend work can AI replace within 3 years?

  • William Yeh | 60-70% | Closed-problem frontend tasks are the majority of junior work
  • Ya-Ching Chang | 50-60% | Based on hiring trend data; the market will act as if this is true regardless
  • Kent Beck | 40-50% | Code generation, yes; design and testing judgment, no
  • Hong-Zhi Lin | 20-30% | Real frontend complexity is massively underestimated
  • Sarah Chen | 35-45% | Boilerplate, yes; cross-system integration and UX judgment, no
  • Dr. Ming-Zhe Chen | 25-35% | Empirical data shows AI still struggles with interconnected frontend logic


Round 2: Where Is the "Moat" for a 25-Year-Old Frontend Engineer?

Moderator: We've established the threat is real, even if the panel disagrees on timing and magnitude. Now the practical question: what skills are AI-proof? What's the moat this 25-year-old should build? Kent, you mentioned design thinking — unpack that.

Kent Beck: Let me be concrete. There are three layers of frontend skill:

Layer 1 — Syntax and implementation. Can you write React components, handle state, call APIs? This is the layer AI is eating. If this is all you have, yes, you're at risk.

Layer 2 — Design and architecture. Can you decompose a complex UI into maintainable, composable components? Can you decide when to use server components vs client components? Can you design a state management approach that scales? This is harder for AI, but not impossible — AI is getting better at this.

Layer 3 — Problem definition and verification. Can you look at a vague product requirement and figure out what questions to ask? Can you push back on a design that's technically feasible but unmaintainable? Can you write acceptance criteria that actually capture what "correct" means? Can you design a test strategy that catches the bugs AI introduces? This is the moat. AI is fundamentally bad at this because it requires understanding human context, organizational dynamics, and the ability to say "no."

The 25-year-old should be sprinting toward Layer 3.
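[Editor's aside: Kent's "acceptance criteria that capture what correct means" can be made concrete with a small sketch. The rule and names below are hypothetical, chosen for illustration: before asking an AI to build a submit flow, a human pins down the enablement rule as an executable predicate. The AI can implement against this; it cannot decide it.]

```typescript
// Hypothetical Layer 3 artifact: "correct" for a submit button, in executable form.
interface FormState {
  requiredFilled: boolean; // every required field has a value
  errors: string[];        // current validation errors
  submitting: boolean;     // a submission is already in flight
}

// The spec: submit is enabled iff the form is complete, valid,
// and not mid-submission (the last clause is the double-submit guard).
function canSubmit(s: FormState): boolean {
  return s.requiredFilled && s.errors.length === 0 && !s.submitting;
}
```

A one-line predicate looks trivial, but each clause is a decision someone had to make — and the `submitting` clause is exactly the kind of edge case Sarah's incident data says AI-assisted juniors ship without.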

Hong-Zhi Lin: Kent, I have a genuine question about this framework. Where does deep technical knowledge fit? I'm talking about understanding the browser rendering pipeline, the compositing thread, how CSS containment affects paint operations, why forced synchronous layouts cause jank. That's not Layer 1 (it's not about writing syntax), it's not Layer 3 (it's not about defining problems). It's Layer 2, but it's a very specific kind of Layer 2 that I'd argue AI can't replicate because it requires understanding the physical constraints of the hardware.

Kent Beck: That's a fair challenge. I'd actually place that deep knowledge as a prerequisite for doing Layer 3 well. You can't define a performance spec without understanding the rendering pipeline. You can't write meaningful acceptance criteria for animation without understanding compositing. So it's not that Layer 2 doesn't matter — it's that Layer 2 in service of Layer 3 is the winning combination. Layer 2 alone is at risk. Layer 2 + Layer 3 is the moat.

Sarah Chen: *immediately* That's the T-shape again! The vertical bar IS deep Layer 2 knowledge. The horizontal bar IS Layer 3 breadth. They reinforce each other. The engineer who knows the rendering pipeline deeply AND can define specs, write tests, and communicate across teams — that person doesn't exist in large numbers, which means they're incredibly valuable.

Dr. Ming-Zhe Chen: There's empirical support for this. In our study, the engineers who were most resistant to AI disruption — meaning their roles weren't affected by AI tool adoption — had a specific profile: deep technical expertise in at least one specialized domain combined with demonstrable cross-functional communication skills. Neither alone was sufficient. It was the combination.

Ya-Ching Chang: *nods* And from a hiring perspective, that combination is unicorn-rare. I can find 50 engineers who know React well. I can find maybe 10 who can communicate clearly across teams. I can find maybe 5 who have deep expertise in performance or a11y. The engineer who has all three? I can count them on one hand in all of Taiwan. That rarity is your market value.

William Yeh: Kent just described my four accountability questions in different language. Let me restate them for the frontend context specifically:

  1. Who defines the spec? Not the Figma file — the behavioral spec. When the user's network drops mid-form-submission, what happens? When a screen reader hits this component, what does it announce? When the API returns an unexpected shape, how does the UI degrade?
  2. Who verifies correctness? The AI wrote the component. How do you know it's right? Not "it renders" — actually right. Right for edge cases, right for accessibility, right for performance on a low-end Android device.
  3. Who controls permissions? In an AI-assisted workflow, who decides what the AI agent can modify? Can it touch the authentication flow? Can it alter the payment form? If you don't have governance, you have chaos.
  4. Who bears consequences? When the AI-generated code ships a bug that costs the company money, whose name is on the incident report?

If you can own all four of these, you're irreplaceable. Not because you write code — because you govern the code.
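[Editor's aside: one of William's spec questions — "when the API returns an unexpected shape, how does the UI degrade?" — lends itself to a hedged sketch. The names (`UserCard`, `toUserCard`, the fallback values) are illustrative assumptions: the point is that the degradation policy lives in a human-owned boundary guard, not in whatever the AI happened to generate.]

```typescript
// Hypothetical boundary guard: accept unknown network data,
// return something the UI can always render.
interface UserCard { name: string; avatarUrl: string }

const FALLBACK: UserCard = { name: "Unknown user", avatarUrl: "/default-avatar.png" };

function toUserCard(data: unknown): UserCard {
  if (typeof data !== "object" || data === null) return FALLBACK;
  const d = data as Record<string, unknown>;
  return {
    // Field-by-field degradation: keep what's usable, fall back per field.
    name: typeof d.name === "string" && d.name.trim() !== "" ? d.name : FALLBACK.name,
    avatarUrl: typeof d.avatarUrl === "string" ? d.avatarUrl : FALLBACK.avatarUrl,
  };
}
```

Writing this guard answers question 1 (the behavioral spec) and gives question 2 (verification) something to test against.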

Hong-Zhi Lin: I want to push back on this "Layer 3 only" narrative. Look, I agree that problem definition is valuable. But there's a dangerous implication here — that deep technical skills don't matter anymore. They do. Let me give you a real example. Last year, we had a performance issue — our React app was re-rendering unnecessarily on a list of 10,000 items. The AI suggested React.memo everywhere. Classic. That actually made it worse because the memo comparison function itself was expensive with our deeply nested objects. The fix was a custom virtualization strategy combined with a restructured state shape that avoided the re-render cascade entirely. That required understanding React's reconciliation algorithm, the JavaScript event loop, and the specific memory allocation patterns of our target devices. That's Layer 2 depth, and it's still a moat.
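[Editor's aside: the core of the virtualization fix Hong-Zhi describes can be sketched in a few lines. This is an illustrative sketch, not his team's actual code — `visibleRange`, fixed row heights, and the overscan default are all assumptions: instead of memoizing 10,000 rows, render only the rows intersecting the viewport.]

```typescript
// Hypothetical core of a list virtualizer: which rows are visible right now?
function visibleRange(
  scrollTop: number,
  viewportHeight: number,
  rowHeight: number,   // assumes fixed-height rows for simplicity
  rowCount: number,
  overscan = 3,        // render a few extra rows to avoid flashes mid-scroll
): { start: number; end: number } {
  const first = Math.floor(scrollTop / rowHeight);
  const last = Math.ceil((scrollTop + viewportHeight) / rowHeight);
  return {
    start: Math.max(0, first - overscan),
    end: Math.min(rowCount, last + overscan), // exclusive end index
  };
}
```

Knowing to reach for this — rather than sprinkling `React.memo` — is the Layer 2 depth Hong-Zhi is defending: it requires understanding why 10,000 reconciled nodes are the real cost.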

Sarah Chen: Both are moats, and I think the framing shouldn't be either/or. Let me introduce the T-shaped model we used for career development at Meta.

The vertical bar of the T is deep technical expertise in one area. For this 25-year-old, it could be: performance optimization, accessibility, design systems, complex state management, or real-time UIs. Go deep enough that you become the person people consult. Not "I can use React" deep — "I can explain why React's fiber architecture makes this specific optimization possible" deep.

The horizontal bar of the T is broad literacy across adjacent domains. This is where AI literacy fits. You don't need to train models — you need to:

  • Write effective prompts for code generation
  • Evaluate AI output critically
  • Design workflows that integrate AI tools
  • Understand the limits and failure modes of AI
  • Communicate across teams — with designers, PMs, backend engineers, data scientists

The strongest engineers I've worked with are T-shaped, not I-shaped or dash-shaped.

Dr. Ming-Zhe Chen: I want to add an empirical perspective on what constitutes a "moat." We did a study last year — published at ICSE 2025 — where we asked 200 software companies worldwide what skills they value most in engineers, given AI tool availability. The results were surprising:

  1. Debugging and root-cause analysis — 89% rated as "critical" or "essential"
  2. Requirements clarification and negotiation — 84%
  3. System design and architecture — 81%
  4. Code review and quality assessment — 78%
  5. Cross-team communication — 73%
  6. Testing strategy and test design — 71%
  7. Performance optimization — 68%
  8. Writing new code from scratch — only 34%

Notice: "writing new code" ranked last. The industry is already valuing everything around the code more than the code itself.

Ya-Ching Chang: From a hiring perspective, I'll add something concrete. I've started seeing a new pattern in job descriptions for senior frontend roles at top Taiwan tech companies. Three years ago, the requirements list was: React, TypeScript, CSS, REST APIs, Git. Today, I'm seeing:

  • "Ability to define and maintain component API contracts"
  • "Experience with AI-assisted development workflows"
  • "Proven ability to debug and resolve production incidents"
  • "Track record of cross-functional collaboration with design and product teams"
  • "Experience authoring and reviewing technical RFCs"

The code-centric requirements are shrinking. The judgment-centric requirements are growing. The 25-year-old should look at senior job descriptions not for what to learn — but for what muscle to build.

Kent Beck: (jumps in) And there's one moat skill nobody's mentioned yet: testing discipline. I'll die on this hill. AI writes code, but AI writes untested code. Or AI writes tests that test the implementation rather than the behavior — which is useless because when the implementation changes, the tests break even if the behavior is correct.

The engineer who can write a comprehensive test suite before the AI writes a single line of code — that engineer is the quality gatekeeper. They define what "correct" means in executable form. That's Clean Code as AI infrastructure — Jain et al.'s ICLR 2024 paper showed that well-structured, well-tested codebases yield 2.3x better results from LLM code generation. Your tests aren't just safety nets — they're the spec that makes AI more effective.
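Kent's distinction can be made concrete: a behavioral test pins the contract and survives refactors, while an implementation-coupled test pins the "how" and breaks on harmless changes. A toy sketch — the `formatPrice` helper is hypothetical, not from the discussion:

```typescript
// Hypothetical helper under test.
function formatPrice(cents: number): string {
  return `$${(cents / 100).toFixed(2)}`;
}

// Behavioral tests: they state the contract. Rewriting the internals with
// Intl.NumberFormat would still pass, because the behavior is unchanged.
function testFormatPriceBehavior(): void {
  if (formatPrice(1999) !== "$19.99") throw new Error("formats dollars and cents");
  if (formatPrice(0) !== "$0.00") throw new Error("handles zero");
  if (formatPrice(5) !== "$0.05") throw new Error("pads sub-dollar amounts");
}

// An implementation-coupled test would instead spy on toFixed being called —
// it fails the moment you refactor, even though every output is identical.

testFormatPriceBehavior();
```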

Hong-Zhi Lin: Okay, Kent, I'll give you that one. I've seen this play out. When our codebase has good tests, AI suggestions are dramatically better because the AI can use the tests as implicit spec. When our tests are garbage — or absent — the AI is flying blind and so are we.

William Yeh: This loops back to Clean Code as infrastructure. The Jain et al. ICLR 2024 paper found that codebases with consistent naming conventions, clear module boundaries, and comprehensive tests saw significantly higher success rates in AI-generated code. So investing in code quality isn't just old-school craftsmanship — it's literally optimizing the AI's working environment. The 25-year-old should think of themselves not as "a coder" but as "the person who creates the conditions under which AI code generation succeeds."

Dr. Ming-Zhe Chen: I want to synthesize what I'm hearing into a framework. There seem to be three categories of moat skills:

  1. Upstream skills (before code is written): requirement analysis, spec writing, architecture design, test strategy definition
  2. Downstream skills (after code is written): code review, debugging, performance profiling, production incident response
  3. Lateral skills (across the process): cross-team communication, stakeholder management, mentoring, technical writing

AI is primarily automating the middle — the actual code writing. Everything upstream, downstream, and lateral remains human-dominated. The 25-year-old's strategy should be to invest in at least one skill from each category.

Sarah Chen: (agrees) That's an excellent taxonomy, Dr. Chen. And I'd add: the interaction between these categories is where the most valuable engineers operate. The engineer who can go from a vague PM requirement (upstream), guide an AI to produce a draft implementation (middle), review and refine that implementation for correctness and performance (downstream), and then communicate the technical decisions to the broader team (lateral) — that engineer is a full-stack problem solver, not in the "frontend + backend" sense, but in the "entire lifecycle" sense.

Hong-Zhi Lin: That's the aspiration. But let me be honest — I've been in this industry 10 years, and I still don't do all of that consistently. It's a career-long development arc, not a 12-month bootcamp. I don't want the 25-year-old to feel like they need to master everything at once.

Kent Beck: (gently) Nobody masters everything at once. The trick is to pick one new habit per quarter. One quarter, you start writing specs before coding. The next quarter, you start writing tests first. The quarter after that, you start giving feedback in code reviews that goes beyond "LGTM." Small, consistent improvements compound into transformational change. I've watched engineers go from mid-level to truly senior in 2-3 years with this approach. Not by doing everything — by doing one new thing consistently, then adding another.

William Yeh: Let me make this even more concrete with numbers. In my experience coaching engineering teams, the compound effect of habit stacking looks like this:

  • Quarter 1: Start writing behavioral specs. Takes 30 minutes extra per feature at first, drops to 10 minutes by end of quarter as it becomes habit. Side effect: your PM starts trusting your judgment more.
  • Quarter 2: Add TDD practice. First month feels 40% slower. By month 3, you're breaking even because debugging time drops to near-zero. Side effect: your code quality metrics improve visibly.
  • Quarter 3: Start giving substantive code reviews. Invest 20 minutes per review instead of 5. Side effect: you become the person the team consults on technical decisions.
  • Quarter 4: Begin documenting solved problems in an engineering journal. 15 minutes per day. Side effect: you have concrete evidence of your growth for performance reviews and interviews.

Total additional time investment: roughly 45-60 minutes per day. That's it. Four habits, compounding over 12 months. And because each habit reinforces the others — specs make TDD easier, TDD makes code review more insightful, code review builds your reputation, documentation makes your value visible — the compound effect is nonlinear.

Hong-Zhi Lin: (surprised) William, that's actually the most actionable framework I've heard all night. Small habits, low time investment, compounding returns. I wish someone had told me that at 25 instead of "read the Gang of Four book cover to cover."

Ya-Ching Chang: And from a career perspective, each of those habits creates a visible signal. Your PM notices you write specs. Your tech lead notices your code reviews are substantive. Your manager notices you're the one other engineers consult. Each signal builds toward a promotion case or a strong interview performance. The key is: make your growth visible. An invisible improvement in your engineering judgment is worth less than a visible one, because career advancement requires that others perceive your value.

Moderator: Let's vote on this one.

Vote: What is the most important moat skill?

| Expert | Vote | Reasoning |
| --- | --- | --- |
| William Yeh | Spec definition & governance | The four accountability questions define the new bottleneck |
| Ya-Ching Chang | Cross-functional communication | Hiring data shows judgment-centric skills outweigh code skills |
| Kent Beck | Testing discipline & design thinking | Tests are executable specs that make AI more effective |
| Hong-Zhi Lin | Deep technical expertise (T-shaped vertical) | You need depth to catch what AI gets wrong |
| Sarah Chen | T-shaped skills (depth + AI literacy) | Both vertical depth and horizontal breadth matter |
| Dr. Ming-Zhe Chen | Debugging & root-cause analysis | Empirical data: 89% of companies rate this as most critical |

第二回合:25 歲前端工程師的「護城河」在哪裡?

主持人: 我們已經確立威脅是真實的,即使座談成員在時間表和規模上意見不一。現在是實際問題:什麼技能是 AI-proof 的?這位 25 歲的工程師該建立什麼護城河?Kent,你提到了設計思維——展開談談。

Kent Beck: 讓我具體說。前端技能有三個層次:

第一層——語法與實作。 你能寫 React 元件、處理狀態、呼叫 API 嗎?這是 AI 正在吞食的層次。如果你只有這個,是的,你有風險。

第二層——設計與架構。 你能把複雜的 UI 拆解成可維護、可組合的元件嗎?你能決定何時用 server components vs client components 嗎?你能設計可擴展的狀態管理方案嗎?這對 AI 比較困難,但不是不可能——AI 在這方面越來越強。

第三層——問題定義與驗證。 你能看著一個模糊的產品需求,找出該問什麼問題嗎?你能推回一個技術上可行但不可維護的設計嗎?你能撰寫真正捕捉「正確」含義的驗收標準嗎?你能設計一個能抓到 AI 引入的 bug 的測試策略嗎?這就是護城河。AI 根本不擅長這個,因為它需要理解人類情境、組織動態,以及說「不」的能力。

這位 25 歲的工程師應該衝刺到第三層。

林宏志: Kent,我對這個框架有一個真心的問題。深度技術知識在哪裡?我說的是理解瀏覽器渲染管線、合成線程、CSS containment 如何影響繪製操作、為什麼強制同步佈局造成卡頓。那不是第一層(不是關於寫語法),也不是第三層(不是關於定義問題)。是第二層,但是一種「非常特定的」第二層,我會說 AI 無法複製,因為它需要理解硬體的物理限制。

Kent Beck: 那是個好挑戰。我其實會把那種深度知識放在「做好第三層的先決條件」。你不能在不理解渲染管線的情況下定義效能規格。你不能在不理解合成的情況下為動畫撰寫有意義的驗收標準。所以不是第二層不重要——而是「服務於」第三層的第二層才是致勝組合。單獨的第二層有風險。第二層 + 第三層才是護城河。

Sarah Chen: (立刻接話) 那又是 T 型!垂直線「就是」深度的第二層知識。水平線「就是」第三層的廣度。它們互相強化。深度理解渲染管線「並且」能定義規格、撰寫測試、跨團隊溝通的工程師——那種人數量不多,這意味著他們價值極高。

Dr. 陳明哲: 這有實證支持。在我們的研究中,最能抵抗 AI 衝擊的工程師——意思是他們的角色不受 AI 工具採用影響——有一個特定的輪廓:在至少一個專業領域的深度技術專業,結合可展示的跨職能溝通技能。兩者單獨都不夠。是組合。

張雅晴: (點頭) 從招聘角度,那個組合是獨角獸般稀有。我能找到 50 個很會 React 的工程師。大概能找到 10 個能跨團隊清楚溝通的。大概能找到 5 個在效能或 a11y 方面有深度專業的。三者都有的工程師?全台灣我一隻手數得出來。那種稀有性就是你的市場價值。

葉大師: Kent 剛用不同的語言描述了我的四個問責問題。讓我針對前端情境重新闡述:

  1. 誰來定義規格? 不是 Figma 檔案——是「行為」規格。使用者在表單送出中途斷網時會發生什麼?螢幕閱讀器碰到這個元件時會宣讀什麼?API 回傳非預期的結構時,UI 如何優雅降級?
  2. 誰來驗證正確性? AI 寫了元件。你怎麼知道它是對的?不是「它能渲染」——是真的對。對邊界案例、對無障礙、對低階 Android 裝置上的效能都是對的。
  3. 誰來控制權限? 在 AI 輔助的工作流程中,誰來決定 AI agent 能修改什麼?它能碰認證流程嗎?能改支付表單嗎?沒有治理,就只有混亂。
  4. 誰來承擔後果? 當 AI 生成的程式碼上線後出了 bug 造成公司損失,事故報告上寫的是誰的名字?

如果你能掌握這四個,你就是不可取代的。不是因為你寫程式碼——而是因為你治理程式碼。

林宏志: 我想挑戰這個「只有第三層」的敘事。我同意問題定義很有價值。但這裡有個危險的暗示——深度技術技能不再重要了。它們重要。讓我給一個真實例子。去年我們有個效能問題——我們的 React 應用在一個 10,000 項的列表上不必要地重新渲染。AI 建議到處加 React.memo。經典。這實際上讓情況更糟,因為我們深層巢狀物件的 memo 比較函數本身就很昂貴。修復方法是自訂的虛擬化策略,結合重新結構化的狀態形狀,完全避免了重新渲染的連鎖反應。這需要理解 React 的 reconciliation 演算法、JavaScript 事件迴圈,以及目標裝置的特定記憶體分配模式。那是第二層的深度,它仍然是護城河。

Sarah Chen: 兩者都是護城河,我認為框架不該是二擇一。讓我介紹我們在 Meta 用於職涯發展的 T 型模型。

T 的「垂直線」是一個領域的深度技術專業。對這位 25 歲的工程師來說,可以是:效能優化、無障礙、設計系統、複雜狀態管理,或即時 UI。深入到你成為別人會來諮詢的人。不是「我會用 React」的深——是「我能解釋為什麼 React 的 fiber 架構使這個特定優化成為可能」的深。

T 的「水平線」是跨相鄰領域的廣泛素養。AI 素養就在這裡。你不需要訓練模型——你需要:

  • 為程式碼生成撰寫有效的 prompt
  • 批判性地評估 AI 輸出
  • 設計整合 AI 工具的工作流程
  • 理解 AI 的限制和失敗模式
  • 跨團隊溝通——與設計師、PM、後端工程師、資料科學家

我共事過最強的工程師是 T 型的,不是 I 型或一字型的。

Dr. 陳明哲: 我想對什麼構成「護城河」加入實證觀點。我們去年做了一項研究——發表在 ICSE 2025——詢問全球 200 家軟體公司,在「AI 工具可用」的前提下,他們最重視工程師的什麼技能。結果令人意外:

  1. 除錯與根因分析 —— 89% 評為「關鍵」或「必備」
  2. 需求澄清與協商 —— 84%
  3. 系統設計與架構 —— 81%
  4. Code review 與品質評估 —— 78%
  5. 跨團隊溝通 —— 73%
  6. 測試策略與測試設計 —— 71%
  7. 效能優化 —— 68%
  8. 從頭寫新程式碼 —— 僅 34%

注意:「寫新程式碼」排最後。產業已經在重視程式碼「周圍」的一切,勝過程式碼本身。

張雅晴: 從招聘角度,我補充具體的東西。我開始在台灣頂尖科技公司的資深前端職缺中看到新模式。三年前,需求清單是:React、TypeScript、CSS、REST APIs、Git。今天,我看到的是:

  • 「能定義並維護元件 API 契約」
  • 「有 AI 輔助開發工作流程經驗」
  • 「有 debug 和解決生產環境事故的實證能力」
  • 「有與設計和產品團隊跨職能協作的紀錄」
  • 「有撰寫和審查技術 RFC 的經驗」

以程式碼為中心的需求在縮減。以判斷力為中心的需求在增長。這位 25 歲的工程師應該看資深職位描述,不是為了知道該學什麼——而是知道該鍛鍊什麼「肌肉」。

Kent Beck: (插入) 還有一個護城河技能沒人提到:測試紀律。我願意為這件事奮戰到底。AI 寫程式碼,但 AI 寫的是「未測試的」程式碼。或者 AI 寫的測試是測試實作而非行為——這毫無用處,因為當實作改變時,即使行為正確,測試也會壞掉。

能在 AI 寫任何一行程式碼「之前」就撰寫全面測試套件的工程師——那位工程師就是品質守門員。他們用可執行的形式定義「正確」的意思。這就是 Clean Code 作為 AI 基礎設施——Jain 等人 ICLR 2024 的論文顯示,結構良好、測試充分的程式碼庫,LLM 程式碼生成的成效高出 2.3 倍。你的測試不只是安全網——它們是讓 AI 更有效的「規格」。

林宏志: 好吧 Kent,這點我同意你。我見過這個現象。當我們的程式碼庫有好的測試時,AI 的建議明顯更好,因為 AI 能把測試當作隱式規格使用。當我們的測試很爛——或根本沒有——AI 就是盲飛,我們也是。

葉大師: 這回到了 Clean Code 作為基礎設施。Jain 等人 ICLR 2024 的論文發現,具有一致命名慣例、清晰模組邊界和全面測試的程式碼庫,在 AI 生成程式碼的成功率上顯著更高。所以投資程式碼品質不只是老派工匠精神——它是在優化 AI 的工作環境。這位 25 歲的工程師應該把自己想成不是「寫程式的人」,而是「創造讓 AI 程式碼生成成功的條件的人」。

Dr. 陳明哲: 我想把大家說的綜合成一個框架。護城河技能似乎有三個類別:

  1. 上游技能(程式碼撰寫之前):需求分析、規格撰寫、架構設計、測試策略定義
  2. 下游技能(程式碼撰寫之後):code review、除錯、效能分析、生產環境事故回應
  3. 橫向技能(貫穿整個流程):跨團隊溝通、利益關係人管理、帶人、技術寫作

AI 主要在自動化「中間」——實際的程式碼撰寫。上游、下游和橫向的一切仍然由人類主導。這位 25 歲工程師的策略應該是至少從每個類別投資一項技能。

Sarah Chen: (同意) 那是很好的分類法,陳教授。我再補充:這些類別之間的「交互作用」才是最有價值的工程師所處的位置。能從模糊的 PM 需求(上游)出發,引導 AI 產出草稿實作(中間),審查和精煉該實作的正確性和效能(下游),然後向更廣泛的團隊溝通技術決策(橫向)的工程師——那位工程師是全端問題解決者,不是「前端 + 後端」的意義,而是「整個生命週期」的意義。

林宏志: 那是理想。但讓我誠實說——我在這個產業 10 年了,我自己都還沒能持續做到所有這些。這是一個職涯長度的發展弧,不是一個 12 個月的 bootcamp。我不想讓這位 25 歲的工程師覺得他需要一次精通所有東西。

Kent Beck: (溫和地) 沒有人一次精通所有東西。竅門是每季挑選一個新習慣。一季,你開始在寫程式碼前寫規格。下一季,你開始先寫測試。再下一季,你在 code review 中開始給超越「LGTM」的回饋。小的、一致的改善會複利成變革性的改變。我看過工程師用這種方法在 2-3 年內從中階到真正資深。不是靠做所有事——是靠持續做一件新事,然後再加一件。

葉大師: 讓我用數字講得更具體。在我指導工程團隊的經驗中,習慣疊加的複利效應看起來是這樣的:

  • 第 1 季: 開始撰寫行為規格。一開始每個功能多花 30 分鐘,到季末變成習慣後降到 10 分鐘。副作用:你的 PM 開始更信任你的判斷。
  • 第 2 季: 加入 TDD 練習。第一個月感覺慢 40%。到第三個月,你打平了,因為 debug 時間降到接近零。副作用:你的程式碼品質指標明顯改善。
  • 第 3 季: 開始給有實質內容的 code review。每個 review 投入 20 分鐘而不是 5 分鐘。副作用:你成為團隊在技術決策上諮詢的人。
  • 第 4 季: 開始在工程日誌中記錄解決的問題。每天 15 分鐘。副作用:你有具體的成長證據用於績效考核和面試。

額外時間投資總計:大約每天 45-60 分鐘。就是這樣。四個習慣,複利 12 個月。因為每個習慣強化其他的——規格讓 TDD 更容易、TDD 讓 code review 更有洞察力、code review 建立你的聲譽、記錄讓你的價值可見——複利效應是非線性的。

林宏志: (驚訝) 葉大師,這其實是今晚我聽到最可行動的框架。小習慣、低時間投資、複利回報。我希望有人在我 25 歲時告訴我這個,而不是「把 Gang of Four 的書從頭讀到尾」。

張雅晴: 而且從職涯角度,每個習慣都創造一個可見的信號。你的 PM 注意到你寫規格。你的 tech lead 注意到你的 code review 有實質內容。你的主管注意到你是其他工程師來諮詢的人。每個信號都累積成升遷依據或強勢的面試表現。關鍵是:讓你的成長「可見」。工程判斷力的隱形提升,其價值低於可見的提升,因為職涯發展需要別人感知到你的價值。

主持人: 來投票吧。

投票:最重要的護城河技能是什麼?

| 專家 | 投票 | 理由 |
| --- | --- | --- |
| 葉大師 | 規格定義與治理 | 四個問責問題定義了新瓶頸 |
| 張雅晴 | 跨職能溝通 | 招聘數據顯示判斷力技能超越程式碼技能 |
| Kent Beck | 測試紀律與設計思維 | 測試是可執行的規格,讓 AI 更有效 |
| 林宏志 | 深度技術專業(T 型的垂直線) | 你需要深度來抓到 AI 搞錯的東西 |
| Sarah Chen | T 型技能(深度 + AI 素養) | 垂直深度和水平廣度都重要 |
| Dr. 陳明哲 | 除錯與根因分析 | 實證數據:89% 的公司評為最關鍵 |

Round 3: The "Mid-Level Self-Assessment" Trap and How to Break Through

Moderator: Let's address the elephant in the room. This engineer has a Senior title but self-assesses as mid-level. That gap is telling. What does it mean, and how do they close it? William, you have a framework for this.

William Yeh: I do, and it's directly from the closed vs open problem distinction. Here's a diagnostic test. Think about your last ten work tasks. How many of them were:

Closed problems — Someone gave you a clear spec (Figma design, API contract, Jira ticket with acceptance criteria), and you implemented it. Input was defined, output was defined, you connected the dots. You might have done it well — fast, clean, bug-free — but the problem space was bounded.

Open problems — The problem itself was unclear. Maybe the PM said "users are dropping off during checkout" and you had to figure out what the actual issue was. Maybe you had to decide on the architecture for a new feature with no precedent in the codebase. Maybe you mediated a disagreement between design and backend about what was technically feasible.

If 80%+ of your work is closed problems, you're functionally mid-level regardless of your title. The title inflation is real — especially in Taiwan where companies hand out "Senior" at 2-3 years of experience to retain talent in a tight market.

Ya-Ching Chang: (immediately) William just described 70% of the "Seniors" I interview. They're excellent implementers. They can take a Jira ticket and ship it reliably. But when I ask "Tell me about a time you pushed back on a product requirement" or "Describe a technical decision you made that affected other teams" — silence. Not because they're bad engineers — because they've never been put in that position, or never seized the opportunity.

Here's the uncomfortable truth: in many Taiwanese companies, "Senior" means "has been here 3+ years and doesn't cause problems." It doesn't mean "drives technical decisions." The gap between that title and genuine senior capability is the gap this engineer is feeling.

Hong-Zhi Lin: Hold on. I want to defend this person a little. The self-assessment itself shows maturity. Most engineers with inflated titles don't know they're inflated. The fact that this 25-year-old recognizes the gap means they already have the self-awareness that's a prerequisite for growth. That's more than I can say for a lot of engineers with 10 years of experience.

Sarah Chen: (agrees) At Meta, we had a framework for this. We called it the "impact radius." Junior engineers have an impact radius of themselves — their own code, their own tickets. Mid-level engineers have an impact radius of their team — they influence team decisions, review others' code meaningfully, and own subsystems. Senior engineers have an impact radius of multiple teams — they drive cross-team technical decisions, set standards, and mentor. Staff engineers influence the entire organization.

The question isn't "are you writing senior-level code?" The question is "what's your impact radius?" If you're writing excellent code but only affecting your own tickets, you're mid-level. If your decisions are shaping how the entire team works, you're senior.

Kent Beck: I'd add another dimension: risk tolerance in decision-making. Mid-level engineers optimize for being correct — they want to make the right choice. Senior engineers optimize for being quickly reversible — they know that most decisions can be undone, so they decide fast, ship fast, and iterate. The shift from "I need to be right" to "I need to decide and learn" is one of the biggest mental model changes in engineering growth.

And AI makes this worse, by the way. Junior and mid-level engineers now have AI as a crutch for decision-making: "Let me ask Claude what the best approach is." But Claude doesn't know your team's context, your deployment pipeline, your users' specific needs. Outsourcing decisions to AI is the opposite of growing into senior capability.

Dr. Ming-Zhe Chen: There's a cognitive science angle here. The Dreyfus model of skill acquisition describes five stages: novice, advanced beginner, competent, proficient, and expert. At the novice-to-competent stages, people follow rules and recognize patterns. At the proficient-to-expert stages, people develop intuition — they can immediately sense that something is off without being able to articulate why. This intuition is built through years of encountering problems, making mistakes, and building mental models.

AI tools threaten this development process. If an AI solves the problem before you struggle with it, you never build the intuition. It's like using a calculator before learning mental arithmetic — you can get answers, but you can't estimate whether the answer makes sense. The "mid-level trap" may actually get worse as AI tools become more prevalent, because fewer engineers will go through the struggle required to develop genuine expertise.

William Yeh: (emphatically) And this connects to something I feel strongly about. The "mid-level trap" is not just about skills — it's about identity. Many engineers identify as "a person who writes code." When you ask them what they do, they say "I'm a React developer" or "I build frontends." That identity is tied to implementation.

A senior engineer's identity is "I solve business problems using technology." Notice the difference? The code is a tool, not the identity. The 25-year-old needs an identity shift as much as a skill shift. When someone asks "what do you do?", the answer should be "I make sure our users can accomplish their goals through our product" — not "I write React components."

Hong-Zhi Lin: (pushes back) That sounds inspirational, but it's disconnected from the reality of most engineering jobs. Most companies don't give their engineers space to "solve business problems." They give them Jira tickets. The engineer doesn't have the political capital or organizational structure to suddenly become a "spec owner." The advice "become someone who defines problems" assumes an organizational environment that supports it. Many companies — especially in Taiwan's traditional tech sector — don't.

Ya-Ching Chang: Hong-Zhi is making a critical point. In my experience, roughly 40% of Taiwanese tech companies have an engineering culture that supports engineer-driven problem definition. In the other 60%, engineers are expected to implement what product managers specify. If you're in the 60%, your options are:

  1. Change the culture from within — risky, slow, and often fails
  2. Move to a company in the 40% — the faster path
  3. Build the skills externally — open source, side projects, communities

I'd recommend option 2 or 3 for this 25-year-old. Life is too short to fight organizational culture when you can vote with your feet.

Kent Beck: (sharply) I'll disagree with Ya-Ching slightly. You can always practice problem definition skills, even in a Jira-ticket-driven culture. Here's how:

When you get a ticket that says "Add a dropdown filter to the user list page," don't just implement it. Ask: "What problem is this solving? Have we validated that users want to filter by this dimension? What happens if the list has 50,000 items — does a dropdown still make sense, or should it be a search field? What about keyboard accessibility?" Even if the PM shuts you down, the act of asking builds the muscle. And sometimes, they listen. And when they listen once, they'll come to you again.

Sarah Chen: That's exactly right. The micro-moments of pushing back, asking why, suggesting alternatives — those accumulate into a reputation. At some point, the PM starts coming to you before writing the ticket, saying "I'm thinking about adding this feature — what do you think?" That's when you know you've crossed the threshold.

William Yeh: Let me put a number on this. In my consulting work, I've seen engineers go from "ticket implementer" to "problem co-definer" in 6-12 months if they're deliberate about it. The key habits:

  1. For every ticket, write down the business problem it's solving (not the technical task)
  2. Before implementing, write acceptance criteria and get PM sign-off
  3. After implementing, check back: did the acceptance criteria actually capture what was needed?
  4. When you find gaps, document them and share with the team

This loop builds the spec-definition muscle. It also builds trust with PMs, because you're showing that you care about the outcome, not just the output.

Dr. Ming-Zhe Chen: I want to add something important about the mid-level trap that's AI-specific. In our research, we've identified a phenomenon we call the "competence illusion." Engineers who use AI tools heavily appear more productive on surface metrics — features shipped, PRs merged, code volume. Their managers often rate them higher in performance reviews. But when we administered deep technical assessments — novel debugging scenarios, system design questions, code review challenges — these same engineers scored lower than peers who used AI tools less.

The implication is alarming: AI can make you look more senior without making you be more senior. You ship more features, but your engineering judgment isn't deepening. And when you hit a genuinely novel problem — the kind that separates senior from mid-level — the gap becomes painfully visible.

Hong-Zhi Lin: That is terrifying, actually. So AI can actively widen the mid-level trap by masking the gap with productivity metrics?

Dr. Ming-Zhe Chen: Exactly. It's a form of Goodhart's Law — "when a measure becomes a target, it ceases to be a good measure." If we measure engineering seniority by output volume, and AI inflates output volume, then output volume no longer measures seniority. Companies need new assessment frameworks for the AI era, and individual engineers need to be honest with themselves about whether their productivity gains come from genuine skill or from AI subsidy.

Sarah Chen: At Meta, we addressed this by introducing "depth reviews" — quarterly technical deep-dives where engineers explain a complex decision they made, walk through their debugging process for a production incident, or present an architecture they designed. These reviews specifically assessed the reasoning process, not the output. It's harder to fake that with AI, because the reviewers ask probing follow-up questions that require genuine understanding.

Ya-Ching Chang: From a hiring perspective, I'm already adjusting my screening process for this. I now include a 30-minute live debugging session in our interviews. I give candidates a buggy React component and watch them diagnose it in real time. The engineers who rely heavily on AI tools struggle here — they're not used to thinking through problems step by step. The engineers who've built genuine debugging intuition shine. That 30-minute session tells me more about their real level than anything on their resume.
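The kind of bug such a session tends to surface is a stale closure — the classic "my state never updates" failure behind many broken React effects. A framework-free sketch of the pattern (all names hypothetical):

```typescript
// Stale closure: the value is captured once at setup and never re-read.
function startPolling(getValue: () => number, onTick: (v: number) => void) {
  const snapshot = getValue();   // BUG: frozen at setup time
  return () => onTick(snapshot); // every tick reports the old value
}

// Fix: defer the read to tick time, so each tick sees current state.
function startPollingFixed(getValue: () => number, onTick: (v: number) => void) {
  return () => onTick(getValue());
}
```

In React terms this is the effect that captures state in its closure instead of reading it fresh; the fix is the same idea — read the value when you use it, not when you set up.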

William Yeh: And this is why I keep saying: the identity shift matters. If you identify as "the person who ships features," AI makes you look good temporarily but hollows out your foundation. If you identify as "the person who understands why the system works," AI can't hollow that out — because understanding is built through deliberate struggle, not through prompt engineering.

Kent Beck: (strongly) One specific action for the 25-year-old on this: when AI writes code for you, don't just use it. Read every line and explain it to yourself. If you can't explain why the AI chose this approach, you don't understand it. And if you don't understand it, you can't debug it when it breaks at 3 AM. The discipline of reading and explaining AI-generated code is the bridge between using AI and learning from AI.

Moderator: Let's vote on the transition question.

Vote: What's the most critical shift from mid-level to truly senior?

| Expert | Vote | Reasoning |
| --- | --- | --- |
| William Yeh | From closed-problem solver to open-problem owner | The problem-type shift defines the career tier |
| Ya-Ching Chang | From individual contributor to cross-functional influencer | Impact radius determines real seniority |
| Kent Beck | From "I need to be right" to "I need to decide and learn" | Decision-making speed and reversibility |
| Hong-Zhi Lin | From implementer to someone who pushes back with data | Technical depth used to challenge assumptions |
| Sarah Chen | From ticket executor to product co-owner | Expanding the impact radius beyond your own code |
| Dr. Ming-Zhe Chen | From rule-follower to intuition-builder | Dreyfus model: genuine expertise requires struggle |

第三回合:「中階自評」的陷阱與突破

主持人: 讓我們直面房間裡的大象。這位工程師有 Senior 職稱但自評中階。這個落差很有意義。它代表什麼,要怎麼填補?葉大師,你有框架。

葉大師: 是的,直接來自封閉 vs 開放問題的區分。這裡有個診斷測試。回想你最近十個工作任務。其中有多少是:

封閉問題——有人給你明確的規格(Figma 設計、API 契約、帶有驗收標準的 Jira ticket),你實作它。輸入已定義,輸出已定義,你連接中間的點。你可能做得很好——快速、乾淨、沒有 bug——但問題空間是有邊界的。

開放問題——問題本身不清楚。也許 PM 說「使用者在結帳時大量流失」,而你得弄清楚實際問題是什麼。也許你需要為一個程式碼庫中沒有先例的新功能決定架構。也許你調解了設計和後端之間關於技術可行性的分歧。

如果你的工作有 80% 以上是封閉問題,那不管你的職稱是什麼,你功能上就是中階。職稱通膨是真的——尤其在台灣,公司在 2-3 年經驗時就給「Senior」來留住人才,因為市場競爭激烈。

張雅晴: (立刻接話) 葉大師剛描述了我面試的 70% 的「Senior」。他們是優秀的實作者。他們能接一張 Jira ticket 然後可靠地交付。但當我問「跟我說一次你推回產品需求的經歷」或「描述一個你做的影響到其他團隊的技術決策」——沉默。不是因為他們是差勁的工程師——而是因為他們從未被放到那個位置,或者從未抓住機會。

不舒服的真相是:在很多台灣公司裡,「Senior」意味著「待了 3 年以上而且不惹麻煩」。它不代表「驅動技術決策」。那個職稱和真正資深能力之間的落差,就是這位工程師感受到的落差。

林宏志: 等等。我想為這個人辯護一下。自我評估本身就顯示成熟度。大多數職稱膨脹的工程師「不知道」自己膨脹了。這位 25 歲的工程師能認知到落差,代表他已經具備成長的先決條件——自我覺察。這比我認識的很多 10 年經驗的工程師都強。

Sarah Chen: (同意) 在 Meta,我們有個框架叫「影響半徑」。初階工程師的影響半徑是自己——自己的程式碼、自己的 ticket。中階工程師的影響半徑是團隊——他們影響團隊決策、有意義地 review 別人的程式碼、負責子系統。資深工程師的影響半徑是多個團隊——他們驅動跨團隊的技術決策、設定標準、帶人。Staff 工程師影響整個組織。

問題不是「你在寫資深級的程式碼嗎?」問題是「你的影響半徑有多大?」如果你寫的程式碼很優秀但只影響自己的 ticket,你就是中階。如果你的決策在塑造整個團隊的工作方式,你就是資深。

Kent Beck: 我再加一個維度:決策中的風險容忍度。中階工程師優化「正確性」——他們想做出對的選擇。資深工程師優化「快速可逆性」——他們知道大多數決策可以撤回,所以他們決策快、交付快、然後迭代。從「我需要正確」到「我需要決策然後學習」的轉變,是工程師成長中最大的心智模型變化之一。

而且 AI 讓這個問題更嚴重。初中階工程師現在把 AI 當作決策拐杖:「讓我問 Claude 什麼方法最好。」但 Claude 不知道你團隊的脈絡、你的部署流程、你使用者的具體需求。把決策外包給 AI,跟成長為資深能力是反方向的。

Dr. 陳明哲: 這裡有個認知科學的角度。Dreyfus 的技能習得模型描述了五個階段:新手、進階初學者、勝任者、精通者和專家。在新手到勝任者階段,人們遵循規則和識別模式。在精通者到專家階段,人們發展出「直覺」——他們能立即感知到某些東西不對勁,而無法清楚說出為什麼。這種直覺是透過多年遇到問題、犯錯和建立心智模型而累積的。

AI 工具威脅了這個發展過程。如果 AI 在你掙扎之前就解決了問題,你永遠不會建立直覺。這就像在學會心算之前就用計算機——你能得到答案,但你無法估計答案是否合理。「中階陷阱」可能會隨著 AI 工具的普及而更加嚴重,因為更少的工程師會經歷發展真正專業所需的掙扎。

葉大師: (強調地) 這連結到一個我很有感觸的事。「中階陷阱」不只是關於技能——是關於「身分認同」。很多工程師認同自己是「寫程式的人」。當你問他們做什麼,他們說「我是 React 開發者」或「我做前端」。那個身分認同綁定在實作上。

資深工程師的身分認同是「我用技術解決商業問題」。注意差異了嗎?程式碼是工具,不是身分認同。這位 25 歲的工程師需要的身分認同轉變不亞於技能轉變。當有人問「你做什麼」時,答案應該是「我確保我們的使用者能透過我們的產品達成目標」——而不是「我寫 React 元件」。

林宏志: (反駁) 那聽起來很勵志,但跟大多數工程師工作的現實脫節。大多數公司不會給工程師空間去「解決商業問題」。他們給你 Jira ticket。工程師沒有政治資本或組織結構來突然變成「規格擁有者」。「成為定義問題的人」這個建議,假設了一個支持它的組織環境。很多公司——尤其是台灣傳統科技業——並不支持。

張雅晴: 宏志提出了一個關鍵觀點。以我的經驗,大約 40% 的台灣科技公司有支持工程師驅動問題定義的工程文化。另外 60% 期望工程師實作產品經理指定的東西。如果你在那 60% 裡,你的選項是:

  1. 從內部改變文化 —— 風險大、緩慢、而且常常失敗
  2. 跳到那 40% 的公司 —— 更快的路徑
  3. 在外部建立技能 —— 開源、side project、社群

我會建議這位 25 歲的工程師選擇 2 或 3。人生太短,不該在能用腳投票的時候去對抗組織文化。

Kent Beck: (銳利地) 我跟雅晴的意見有點不同。即使在 Jira-ticket 驅動的文化中,你也能練習問題定義的技能。方法是:

當你接到一張 ticket 說「在使用者列表頁面加一個下拉篩選器」時,不要只是實作它。問:「這在解決什麼問題?我們有驗證過使用者想用這個維度來篩選嗎?如果列表有 50,000 項——下拉選單還合理嗎,還是應該用搜尋欄位?鍵盤無障礙呢?」即使 PM 拒絕你,「提問的行為」本身就在鍛鍊肌肉。而且有時候,他們會聽。當他們聽了一次,下次他們就會再來找你。

Sarah Chen: 完全正確。那些推回、問為什麼、提出替代方案的微小時刻——它們累積成聲譽。到了某個時候,PM 會在寫 ticket「之前」就來找你,說「我在想加這個功能——你覺得怎樣?」那就是你知道自己跨過了門檻。

葉大師: 讓我給個數字。在我的顧問工作中,我見過工程師在 6-12 個月內從「ticket 實作者」變成「問題共同定義者」,如果他們刻意去做。關鍵習慣:

  1. 對每張 ticket,寫下它在解決的「商業」問題(不是技術任務)
  2. 實作前,撰寫驗收標準並取得 PM 簽核
  3. 實作後,回頭檢查:驗收標準是否真的捕捉了需要的東西?
  4. 當你發現差距時,記錄下來並分享給團隊

這個循環建立規格定義的肌肉。它也建立與 PM 的信任,因為你展示了你關心「結果」,而不只是產出。

Dr. 陳明哲: 我想補充一個跟 AI 特別相關的中階陷阱現象。在我們的研究中,我們發現了一個叫做「能力幻覺」的現象。大量使用 AI 工具的工程師在表面指標上看起來更有生產力——交付的功能、合併的 PR、程式碼量。他們的主管通常在績效考核中給他們更高的評分。但當我們進行深度技術評估——新穎的除錯場景、系統設計問題、code review 挑戰——這些同樣的工程師得分低於較少使用 AI 工具的同儕。

這個暗示令人警覺:AI 可以讓你「看起來」更資深,卻不會讓你「真的」更資深。你交付更多功能,但你的工程判斷力沒有加深。而當你碰到一個真正新穎的問題——那種區分資深和中階的問題——差距就會痛苦地顯現。

林宏志: 那真的很可怕。所以 AI 可以透過用生產力指標掩蓋差距來積極地加寬中階陷阱?

Dr. 陳明哲: 沒錯。這是 Goodhart 定律的一種形式——「當一個指標變成目標,它就不再是好的指標」。如果我們用產出量來衡量工程資深度,而 AI 膨脹了產出量,那產出量就不再衡量資深度了。公司需要 AI 時代的新評估框架,而個別工程師需要對自己誠實——他們的生產力提升來自真正的技能,還是來自 AI 的補貼。

Sarah Chen: 在 Meta,我們透過引入「深度評審」來解決這個問題——每季的技術深潛,工程師解釋他們做的一個複雜決策、走過一個生產環境事故的除錯過程,或展示他們設計的架構。這些評審特別評估「推理過程」,而不是產出。用 AI 偽造這個比較難,因為評審者會問深入的追問,需要真正的理解。

張雅晴: 從招聘角度,我已經在調整篩選流程。我現在在面試中加入 30 分鐘的即時除錯環節。我給候選人一個有 bug 的 React 元件,看他們即時診斷。大量依賴 AI 工具的工程師在這裡很掙扎——他們不習慣一步步思考問題。建立了真正除錯直覺的工程師則表現出色。那 30 分鐘的環節告訴我的比履歷上任何東西都多。

葉大師: 這就是為什麼我一直說:身分認同轉變很重要。如果你認同自己是「交付功能的人」,AI 讓你暫時看起來很好但掏空你的基礎。如果你認同自己是「理解系統為什麼運作的人」,AI 無法掏空那個——因為理解是透過刻意掙扎建立的,不是透過 prompt engineering。

Kent Beck: (強烈地) 一個具體的行動給這位 25 歲的工程師:當 AI 為你寫程式碼時,不要只是用它。讀每一行並向自己解釋。如果你無法解釋為什麼 AI 選擇這個方法,你就不理解它。如果你不理解它,凌晨三點壞掉時你就無法 debug。閱讀和解釋 AI 生成程式碼的紀律,是使用 AI 和從 AI 學習之間的橋樑。

主持人: 來投票轉型問題吧。

投票:從中階到真正資深,最關鍵的轉變是什麼?

| 專家 | 投票 | 理由 |
| --- | --- | --- |
| 葉大師 | 從封閉問題解決者到開放問題擁有者 | 問題類型的轉變定義了職涯層級 |
| 張雅晴 | 從個人貢獻者到跨職能影響者 | 影響半徑決定了真正的資深程度 |
| Kent Beck | 從「我需要正確」到「我需要決策然後學習」 | 決策速度和可逆性 |
| 林宏志 | 從實作者到用數據推回的人 | 技術深度用來挑戰假設 |
| Sarah Chen | 從 ticket 執行者到產品共同擁有者 | 擴展影響半徑到自己程式碼之外 |
| Dr. 陳明哲 | 從規則追隨者到直覺建立者 | Dreyfus 模型:真正的專業需要掙扎 |

Round 4: Concrete Action Plan — What to Do in the Next 12 Months

Moderator: Theory is great, but this 25-year-old needs a concrete plan. Each expert, give your top 3 recommendations for the next 12 months. Then we'll debate priorities. William, start.

William Yeh: My three, in order of priority:

1. Become the spec owner on your team (months 1-3). For every feature you implement, write the behavioral spec before you write any code. Not just acceptance criteria — edge cases, error states, accessibility requirements, performance budgets. Get PM sign-off. After shipping, track whether the spec was sufficient. Within 3 months, you'll be the person PMs consult before writing tickets.

2. Build a verification practice (months 3-6). Start writing tests not just for your code but for the contract between your code and the system. Integration tests, E2E tests, visual regression tests. Make yourself the person who catches what AI misses. Learn to write test strategies — documents that say "here's how we'll know this feature is correct" — not just individual test cases.

3. Learn AI-assisted development properly (months 6-12). Not "use Copilot sometimes." I mean: learn prompt engineering for code generation, learn how to structure your codebase so AI tools work better (Clean Code as infrastructure), and learn to evaluate AI output critically. The goal is to be the person who uses AI as a 10x lever, not a crutch.

Kent Beck: Mine overlap with William's but with different emphasis:

1. Master TDD — seriously, not lip service (months 1-4). Write tests first. Not sometimes. Every time. For the first month, it'll feel slow. By month 3, you'll be faster than you were without TDD because you'll spend zero time debugging. More importantly, you'll develop the habit of defining "correct" before implementing — which is the exact skill that makes you irreplaceable in an AI-assisted workflow.

2. Study design patterns through refactoring (months 2-8). Don't read a design patterns book — that's theory. Instead, take a messy part of your codebase and refactor it. Apply one pattern. See what improves, what doesn't. Do this monthly. The goal isn't to memorize patterns — it's to develop the taste for good design. Taste is what separates "write me a component" from "write me a component that fits elegantly into this architecture."

3. Contribute to open source (months 4-12). Not to build your resume — to practice working in ambiguous, multi-stakeholder environments. Open source maintainers deal with vague feature requests, conflicting opinions, backward compatibility, and zero organizational authority. These are all open-problem skills. Pick a frontend library you use daily. Start with documentation fixes, then bug fixes, then small features. By month 12, you'll have practiced cross-team collaboration without needing to change jobs.

Sarah Chen: The T-shaped approach:

1. Pick your vertical and go deep (months 1-6). Choose ONE area: accessibility, performance, design systems, complex state management, or real-time UIs. Don't spread thin. Read the specification (yes, the W3C spec for a11y, the browser engine source for performance). Build expertise that goes deeper than "I used this library." The goal: become the person on your team that others consult for that specific area. A 25-year-old who is the definitive a11y expert on their team has a moat that AI can't touch for years.

2. Build AI literacy through practice (months 3-9). Set up a structured experiment. Every week, take a real task from your backlog and try to complete it using AI tools first. Document: what worked, what didn't, where you had to intervene. Track your accuracy rate over time. After 6 months, you'll have an empirical understanding of AI's capabilities and limits — not just opinions. This makes you the team's go-to person for AI workflow integration.

3. Expand your horizontal bar (months 6-12). Take one cross-functional skill and invest in it. Options: learn basic UX research methods, shadow a PM for a sprint, learn observability/monitoring, or contribute to your team's CI/CD pipeline. The goal isn't mastery — it's conversational fluency. When the PM says "we need better error tracking," you should be able to say "I'll set up Sentry with custom error boundaries" — not "that's DevOps's job."
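Sarah's "Sentry with custom error boundaries" idea can be sketched without React. Hedged: `reportError` stands in for a real SDK call such as `Sentry.captureException`, and `withErrorBoundary` is an illustrative helper, not React's actual error-boundary API.

```javascript
// Stand-in for an error-tracking SDK call (e.g. Sentry.captureException).
function reportError(error, context) {
  console.error('[error-tracker]', context, error.message);
}

// Wrap a render function so a thrown error is reported and a fallback is
// returned, instead of crashing the whole page.
function withErrorBoundary(render, fallback) {
  return function boundedRender(props) {
    try {
      return render(props);
    } catch (error) {
      reportError(error, { component: render.name });
      return fallback;
    }
  };
}

const renderUser = (user) => `Hello, ${user.name.toUpperCase()}`;
const safeRenderUser = withErrorBoundary(renderUser, 'Something went wrong');

console.log(safeRenderUser({ name: 'mei' })); // Hello, MEI
console.log(safeRenderUser({}));              // Something went wrong (error reported)
```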

Hong-Zhi Lin: I have a different angle. I think the other recommendations are good but they assume this person is in a supportive environment. Let me give the pragmatic version:

1. Build a portfolio of solved hard problems (months 1-12, ongoing). Start a private engineering journal. For every non-trivial problem you solve, document: what the problem was, why it was hard, what approaches you tried, what worked, and what you learned. This is your ammunition for interviews, promotions, and self-assessment. It's also the raw material for blog posts, conference talks, and community contributions. Most engineers can't articulate their own value because they don't track it.

2. Learn the browser deeply, not just the framework (months 1-8). React, Vue, Next.js — these are all abstractions. Learn what's underneath. How does the browser render a page? What triggers layout recalculation? How does the compositing layer work? What's the difference between a forced synchronous layout and an asynchronous one? This depth makes you framework-independent. When the next framework wave hits — and it will — you'll adapt in weeks, not months. And AI can't fake this depth of understanding.

3. Build your network intentionally (months 1-12). Join Taiwan's frontend community meetups. Attend COSCUP, MOPCON, and JSDC. Not to collect business cards — to build relationships with engineers who are solving open problems. Your network determines your career trajectory more than your code. Specifically: find 2-3 senior engineers you respect and build mentorship relationships. Ask them how they made the mid-to-senior transition.
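Hong-Zhi's point about forced synchronous layout can be made concrete with a toy model. Hedged sketch: `FakeElement` only simulates the browser's invalidate-then-measure behavior; in a real page, `write()` would be a style mutation and `read()` a property access like `offsetWidth`.

```javascript
// Toy model: the browser must recompute layout when a read follows a write.
class FakeElement {
  constructor() {
    this.dirty = false;      // layout invalidated by a pending write
    this.layoutCount = 0;    // forced synchronous layouts performed
  }
  write() { this.dirty = true; }   // e.g. el.style.width = '50px'
  read() {                         // e.g. el.offsetWidth
    if (this.dirty) {
      this.layoutCount += 1;       // forced synchronous layout
      this.dirty = false;
    }
  }
}

// Thrashing: interleaved write/read forces a layout on every iteration.
const thrashed = new FakeElement();
for (let i = 0; i < 5; i++) { thrashed.write(); thrashed.read(); }

// Batched: all writes, then all reads; only the first read forces layout.
const batched = new FakeElement();
for (let i = 0; i < 5; i++) batched.write();
for (let i = 0; i < 5; i++) batched.read();

console.log({ thrashed: thrashed.layoutCount, batched: batched.layoutCount });
// { thrashed: 5, batched: 1 }
```

This is the intuition behind libraries like FastDOM and React's batched updates: separating reads from writes turns N forced layouts into one.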

Ya-Ching Chang: From the HR perspective:

1. Fix your resume and LinkedIn NOW (month 1). This sounds basic, but 80% of engineers I screen have resumes that list technologies ("React, TypeScript, Next.js") instead of impact ("Reduced page load time by 40% by implementing custom virtualization, affecting 2M monthly users"). Rewrite every bullet point in PAR format: Problem, Action, Result. This single change can increase interview callbacks by 50%.

2. Get experience with AI in your workflow and document it (months 1-6). Companies are actively seeking engineers who can demonstrate AI-assisted development experience. Not "I used Copilot" — "I designed a workflow where AI handled boilerplate generation while I focused on architecture and testing, which increased our team's throughput by 30%." That's a concrete, quantified experience that hiring managers want.

3. Target your next role strategically (months 6-12). Don't just apply randomly. Identify 10-15 companies that have the engineering culture I described earlier — where engineers co-define problems. Research their tech stack, read their engineering blog, find connections on LinkedIn. Apply with a tailored pitch that shows you understand their specific challenges. One targeted application is worth fifty blind ones.

Dr. Ming-Zhe Chen: The academic perspective on skill development:

1. Practice deliberate debugging (months 1-6). Find a complex open-source frontend project — Next.js core, React itself, or a major component library. Set up the development environment and start investigating open issues. Don't try to fix them at first — just try to understand the root cause. This builds the debugging and root-cause analysis skill that 89% of companies rated as critical. It also builds your ability to read and understand complex code — which is exactly the verification skill needed in an AI-assisted world.

2. Study the fundamentals that AI can't shortcut (months 3-9). I mean: operating systems concepts (event loops, concurrency), networking (HTTP/2, WebSockets, CDN behavior), and data structures. Not for interview prep — for genuine understanding. When an AI generates code that has a subtle performance issue because it doesn't understand how the browser event loop handles microtasks vs macrotasks, you'll be the person who catches it.

3. Develop your own AI evaluation framework (months 6-12). Use what you learn from debugging and fundamentals to build a systematic way of evaluating AI-generated code. What do you check first? What are the common failure modes? How do you validate correctness beyond "it compiles and runs"? Document this framework and share it with your team. You become the quality gatekeeper — the human in the loop who makes AI outputs trustworthy.
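The microtask-versus-macrotask ordering Dr. Chen mentions is easy to verify empirically; this snippet runs in Node or a browser console:

```javascript
// Demonstrates event-loop ordering: synchronous code runs first, then
// microtasks (promise callbacks), then macrotasks (setTimeout), even at 0ms.
const order = [];

setTimeout(() => order.push('macrotask: setTimeout(0)'), 0);
Promise.resolve().then(() => order.push('microtask: promise .then'));
order.push('synchronous');

// By the time this later timer fires, everything above has run.
setTimeout(() => {
  console.log(order.join(' -> '));
  // synchronous -> microtask: promise .then -> macrotask: setTimeout(0)
}, 10);
```

AI-generated code that, say, reads state in a `setTimeout` expecting a promise chain to have settled is exactly the kind of subtle bug this ordering knowledge lets you catch in review.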

Moderator: Lots of recommendations. The key tension seems to be: AI tools first, or fundamentals first?

William Yeh: Fundamentals first, but not in a vacuum. You need both — but if you learn AI tools without fundamentals, you're building on sand. If you learn fundamentals without AI literacy, you're leaving a 10x productivity multiplier on the table.

Kent Beck: (firmly) Fundamentals. Always fundamentals. The tools change every 18 months. The principles endure. I wrote the first XP book in 1999. The tools are completely different now. The principles — feedback, simplicity, communication, courage — are exactly the same. Invest in principles.

Sarah Chen: I'll push back on Kent slightly. In 2026, not learning AI tools is like not learning Git in 2012. Yes, fundamentals matter more. But if you can't use the tools your industry runs on, you're handicapping yourself unnecessarily. Months 1-3 should include some structured AI tool learning, even as you go deep on fundamentals.

Hong-Zhi Lin: The pragmatic answer is: whatever you learn, ship something with it. Theory without practice is academic. Learn TDD by actually TDD-ing a feature. Learn AI tools by actually building something with them. Learn a11y by actually auditing a real page. The 25-year-old doesn't need a study plan — they need a building plan.

Dr. Ming-Zhe Chen: The research supports a structured approach — interleave fundamentals and tools. Cognitive science shows that interleaved practice (alternating between different types of learning) produces better long-term retention than blocked practice (mastering one thing at a time). So: one week deep on React internals, the next week practicing AI-assisted development, the next week studying testing patterns. The variety keeps your brain engaged and helps you make cross-domain connections.

Ya-Ching Chang: (jumping in) Can I make a controversial recommendation? I think the 25-year-old should also consider lateral moves — not just vertical growth. Here's what I mean. If you're in a company where frontend engineering is viewed as a cost center — "just make the screens look right" — no amount of skill development will change how you're valued. But if you move to a company where frontend engineering is a profit center — SaaS companies where the UI IS the product — your same skills are suddenly 3x more valuable.

I've seen engineers get 40-60% salary jumps by moving from an agency or traditional company to a product company where frontend is core to the business. Same skills, different context, dramatically different compensation and growth potential.

William Yeh: Ya-Ching makes an excellent point that I want to connect to TOC. In the Theory of Constraints, we say "optimize the bottleneck, not the non-bottleneck." If you're in a company where frontend isn't the bottleneck — where the company's success doesn't depend on frontend quality — then improving your frontend skills has diminishing returns for your career within that organization. Move to where your skills are the bottleneck. That's where your improvement translates directly into business value, and business value translates into career advancement.

Hong-Zhi Lin: (energized) This is actually the most practical advice in this whole discussion. I moved from a hardware company's software division — where frontend was "necessary overhead" — to a SaaS startup where the frontend IS the product. Same me, same skills, but suddenly my a11y work and performance optimization went from "nice-to-have that nobody appreciates" to "core business capability that directly affects revenue." My salary went up 45%, and more importantly, my growth trajectory changed completely because the company invested in my development.

Kent Beck: There's a beautiful connection here to the open-vs-closed problem framework. In companies where frontend is a cost center, most frontend work is closed-problem: "implement this design." In companies where frontend IS the product, more work is inherently open-problem: "figure out how to make this user experience work." The environment selects for the type of work you do, which selects for the type of growth you get. Choose your environment deliberately.

Sarah Chen: I'll add a concrete filter for evaluating companies. Ask during the interview: "Can you describe a recent decision where the frontend team influenced the product direction?" If the answer is "We built what PM specified," that's a cost-center frontend team. If the answer is "Our frontend performance audit revealed that page load time was killing conversion, so we changed the product roadmap to prioritize performance," that's a profit-center frontend team. Join the second kind.

Dr. Ming-Zhe Chen: And there's a self-reinforcing dynamic here. Companies that treat frontend as a profit center attract better frontend engineers, which creates a stronger engineering culture, which makes the company more successful, which attracts even better engineers. It's a flywheel. The inverse is also true — companies that treat frontend as a cost center lose their best frontend engineers, which degrades their product, which confirms the bias that "frontend doesn't matter." Choose the upward spiral, not the downward one.

William Yeh: One more tactical point. When evaluating companies, look at their frontend-to-backend ratio. If a company has 50 backend engineers and 3 frontend engineers, frontend is clearly a cost center. If the ratio is closer to 1:1, or even frontend-heavy, the company values frontend as a core competency. This is a simple heuristic that's surprisingly predictive.

Ya-Ching Chang: Let me give specific Taiwan examples. Companies where frontend IS the product or core differentiator:

  • 91APP — their frontend IS the storefront for thousands of retailers. Frontend quality directly impacts merchant revenue.
  • Dcard — social platform where UX experience IS the product. Frontend performance and interaction quality determine user retention.
  • KKday — travel booking platform where the booking flow IS the revenue engine. Conversion rate is directly tied to frontend quality.
  • Appier — AI-powered marketing platform where the dashboard IS how customers interact with their AI. Frontend clarity determines customer success.

Companies where frontend is a support function (not bad companies, but different career dynamics):

  • Hardware companies' internal tools teams
  • Financial institutions' regulatory compliance UIs
  • Government contract software
  • B2B enterprise software with captive audiences

The 25-year-old should aim for the first category if they want to maximize both growth and impact.

Hong-Zhi Lin: (adding) And if you're currently at a company in the second category, don't quit tomorrow. Use it as a training ground. The pressure is lower, the pace is slower — use that breathing room to practice TDD, write specs, build your portfolio. Then move to the first category when you're ready. The lower-pressure environment is actually an advantage for learning — as long as you don't stay so long that you stagnate.

Kent Beck: That's a wise reframe. Every environment has learning opportunities — you just have to be intentional about extracting them. The worst career move isn't being in the wrong company — it's being in any company without intentionality about what you're learning.

Vote: What should be the #1 priority action?

Expert | Vote | Reasoning
William Yeh | Become the spec owner on your team | Highest leverage: changes your role, not just your skills
Ya-Ching Chang | Fix resume + target next role strategically | Career ROI: right environment accelerates everything else
Kent Beck | Master TDD | Foundation for everything else: quality, design, AI verification
Hong-Zhi Lin | Build a portfolio of solved hard problems | Practical: demonstrates value regardless of environment
Sarah Chen | Pick your vertical and go deep | T-shaped depth creates an unfakeable moat
Dr. Ming-Zhe Chen | Practice deliberate debugging | Builds the #1 rated industry skill (89% "critical")

第四回合:具體行動方案——未來 12 個月該做什麼?

主持人: 理論很好,但這位 25 歲的工程師需要具體計畫。每位專家,給出你未來 12 個月的前三名建議。然後我們辯論優先順序。葉大師,先請。

葉大師: 我的三個,按優先順序:

1. 成為你團隊的規格擁有者(第 1-3 個月)。 對每個你實作的功能,在寫任何程式碼「之前」撰寫行為規格。不只是驗收標準——邊界案例、錯誤狀態、無障礙需求、效能預算。取得 PM 簽核。交付後追蹤規格是否足夠。3 個月內,你會成為 PM 在寫 ticket 前諮詢的人。

2. 建立驗證實踐(第 3-6 個月)。 開始寫測試,不只是為你的程式碼,而是為你的程式碼和系統之間的「契約」。整合測試、E2E 測試、視覺回歸測試。讓自己成為那個抓到 AI 遺漏的人。學會撰寫測試策略——說明「這是我們如何知道這個功能是正確的」的文件——而不只是個別測試案例。

3. 正確地學習 AI 輔助開發(第 6-12 個月)。 不是「偶爾用 Copilot」。我的意思是:學習程式碼生成的 prompt engineering、學習如何結構化你的程式碼庫讓 AI 工具更好用(Clean Code 作為基礎設施),以及學習批判性地評估 AI 輸出。目標是成為那個把 AI 當 10 倍槓桿的人,而不是當拐杖。

Kent Beck: 我的跟葉大師有重疊但重點不同:

1. 認真精通 TDD——不是嘴巴說說(第 1-4 個月)。 先寫測試。不是有時候。每次。第一個月會覺得慢。到第三個月,你會比沒有 TDD 時更快,因為你 debug 的時間趨近於零。更重要的是,你會養成在實作前定義「正確」的習慣——這正是讓你在 AI 輔助工作流程中不可取代的技能。

2. 透過重構學習設計模式(第 2-8 個月)。 不要讀設計模式的書——那是理論。而是拿你程式碼庫中凌亂的一部分去重構。套用一個模式。看看什麼改善了,什麼沒有。每月做一次。目標不是記住模式——是發展好設計的「品味」。品味是區分「幫我寫個元件」和「幫我寫個優雅融入這個架構的元件」的東西。

3. 貢獻開源專案(第 4-12 個月)。 不是為了充實履歷——是為了練習在模糊、多方利益關係人的環境中工作。開源維護者處理模糊的功能請求、衝突的意見、向後相容性和零組織權力。這些全是開放問題技能。選一個你每天在用的前端函式庫。從文件修正開始,然後是 bug 修正,然後是小功能。到第 12 個月,你就練習了跨團隊協作而不需要換工作。

Sarah Chen: T 型方法:

1. 選定你的垂直方向然後深入(第 1-6 個月)。 選「一個」領域:無障礙、效能、設計系統、複雜狀態管理,或即時 UI。不要分散。讀規範(對,a11y 的 W3C 規範、效能的瀏覽器引擎原始碼)。建立超越「我用過這個函式庫」的專業。目標:成為你團隊中別人會針對那個特定領域來諮詢的人。一個 25 歲的工程師如果是團隊中無可爭議的 a11y 專家,就有一條 AI 好幾年都碰不到的護城河。

2. 透過實踐建立 AI 素養(第 3-9 個月)。 設計一個結構化實驗。每週從你的 backlog 中拿一個真實任務,先嘗試用 AI 工具完成。記錄:什麼有效、什麼沒效、你在哪裡需要介入。追蹤你的準確率隨時間的變化。6 個月後,你會對 AI 的能力和限制有實證的理解——不只是意見。這讓你成為團隊中 AI 工作流整合的首選人。

3. 擴展你的水平線(第 6-12 個月)。 選一個跨職能技能來投資。選項:學基本的 UX 研究方法、跟一個 PM 一起跑一個 sprint、學可觀測性/監控,或貢獻到你團隊的 CI/CD pipeline。目標不是精通——是對話流暢度。當 PM 說「我們需要更好的錯誤追蹤」時,你應該能說「我來用自訂的 error boundary 設定 Sentry」——而不是「那是 DevOps 的事」。

林宏志: 我有不同的角度。我覺得其他建議都很好,但它們假設這個人在一個支持性的環境裡。讓我給務實版本:

1. 建立解決困難問題的作品集(第 1-12 個月,持續)。 開始一個私人工程日誌。對每個非平凡的問題,記錄:問題是什麼、為什麼困難、你嘗試了什麼方法、什麼有效、你學到了什麼。這是你面試、升遷和自我評估的彈藥。它也是部落格文章、技術演講和社群貢獻的原材料。大多數工程師無法表達自己的價值,因為他們不追蹤。

2. 深入學習瀏覽器,而不只是框架(第 1-8 個月)。 React、Vue、Next.js——這些都是抽象層。學底下的東西。瀏覽器怎麼渲染一個頁面?什麼觸發佈局重新計算?合成層怎麼運作?強制同步佈局和非同步佈局有什麼差別?這個深度讓你獨立於框架之外。當下一波框架浪潮來臨——它會來——你會在幾週內適應,而不是幾個月。AI 也無法偽裝這種理解深度。

3. 有意識地建立人脈(第 1-12 個月)。 加入台灣的前端社群聚會。參加 COSCUP、MOPCON 和 JSDC。不是為了蒐集名片——是為了與正在解決開放問題的工程師建立關係。你的人脈對職涯軌跡的決定性比你的程式碼更大。具體來說:找 2-3 位你尊敬的資深工程師,建立導師關係。問他們怎麼完成中階到資深的轉變。

張雅晴: 從人資角度:

1. 現在就修正你的履歷和 LinkedIn(第 1 個月)。 這聽起來很基本,但我篩選的 80% 的工程師履歷是列技術(「React、TypeScript、Next.js」)而不是影響力(「透過實作自訂虛擬化將頁面載入時間減少 40%,影響 200 萬月活躍使用者」)。用 PAR 格式重寫每一個要點:Problem(問題)、Action(行動)、Result(結果)。這一個改變就能增加 50% 的面試回覆率。

2. 在工作流程中使用 AI 並記錄下來(第 1-6 個月)。 企業正在積極尋找能展示 AI 輔助開發經驗的工程師。不是「我用了 Copilot」——而是「我設計了一個工作流程,AI 處理樣板程式碼生成而我專注在架構和測試上,這提高了我們團隊 30% 的產出」。那是招聘主管想要的具體、量化的經驗。

3. 策略性地瞄準你的下一個職位(第 6-12 個月)。 不要隨便投。找出 10-15 家有我先前描述的工程文化的公司——工程師共同定義問題的。研究他們的技術棧、讀他們的工程部落格、在 LinkedIn 上找連結。用展示你理解他們特定挑戰的客製化 pitch 來投。一份有目標的申請勝過五十份亂投的。

Dr. 陳明哲: 技能發展的學術觀點:

1. 刻意練習除錯(第 1-6 個月)。 找一個複雜的開源前端專案——Next.js core、React 本身,或一個主要的元件庫。設好開發環境,開始研究開放的 issue。一開始不要嘗試修復——先試著理解根因。這建立了 89% 的公司評為關鍵的除錯與根因分析技能。它也建立你閱讀和理解複雜程式碼的能力——這正是 AI 輔助世界中需要的驗證技能。

2. 學習 AI 無法走捷徑的基礎知識(第 3-9 個月)。 我的意思是:作業系統概念(事件迴圈、並行)、網路(HTTP/2、WebSocket、CDN 行為)和資料結構。不是為了面試準備——是為了真正理解。當 AI 生成的程式碼因為不理解瀏覽器事件迴圈如何處理 microtask vs macrotask 而有微妙的效能問題時,你會是抓到的那個人。

3. 發展你自己的 AI 評估框架(第 6-12 個月)。 用你從除錯和基礎知識中學到的,建立一套系統性的方法來評估 AI 生成的程式碼。你先檢查什麼?常見的失敗模式是什麼?你怎麼驗證正確性,而不只是「它能編譯和執行」?記錄這個框架並分享給你的團隊。你成為品質守門員——迴圈中的人類,讓 AI 的輸出變得值得信賴。

主持人: 很多建議。關鍵的張力似乎是:先學 AI 工具,還是先學基礎知識?

葉大師: 基礎優先,但不是在真空中。你需要兩者——但如果你在沒有基礎的情況下學 AI 工具,你是在沙上建塔。如果你有基礎但沒有 AI 素養,你在白白浪費 10 倍的生產力乘數。

Kent Beck: (堅定地)基礎。永遠是基礎。工具每 18 個月就換。原則持久。我在 1999 年寫了第一本 XP 的書。工具完全不同了。原則——回饋、簡潔、溝通、勇氣——完全一樣。投資在原則上。

Sarah Chen: 我要稍微挑戰 Kent。在 2026 年,不學 AI 工具就像 2012 年不學 Git。是的,基礎更重要。但如果你不會用你的產業在跑的工具,你就是在不必要地削弱自己。第 1-3 個月應該包含「一些」結構化的 AI 工具學習,即使同時在深入基礎。

林宏志: 務實的答案是:不管你學什麼,用它交付東西。沒有實踐的理論是學術的。透過實際 TDD 一個功能來學 TDD。透過實際用 AI 工具建構東西來學 AI 工具。透過實際審查一個真實頁面來學 a11y。這位 25 歲的工程師不需要學習計畫——他們需要「建構」計畫。

Dr. 陳明哲: 研究支持結構化的方法——交錯基礎和工具。認知科學顯示,交錯練習(在不同類型的學習之間交替)比集中練習(一次精通一件事)產生更好的長期記憶保留。所以:一週深入 React 內部原理,下一週練習 AI 輔助開發,再下一週學習測試模式。多樣性讓你的大腦保持投入,也幫助你建立跨領域的連結。

張雅晴: (插入)我可以提一個有爭議的建議嗎?我認為這位 25 歲的工程師也應該考慮橫向移動——不只是垂直成長。我的意思是:如果你在一家把前端工程視為成本中心的公司——「就是讓畫面弄好看」——再多的技能發展都不會改變你被估值的方式。但如果你搬到一家前端工程是利潤中心的公司——UI 「就是」產品的 SaaS 公司——你同樣的技能突然有 3 倍的價值。

我見過工程師透過從代理商或傳統公司跳到前端是核心業務的產品公司,薪資跳了 40-60%。同樣的技能,不同的脈絡,戲劇性不同的薪資和成長潛力。

葉大師: 雅晴提了一個很好的觀點,我想連結到 TOC。在約束理論中,我們說「優化瓶頸,不是非瓶頸」。如果你在一家前端不是瓶頸的公司——公司的成功不取決於前端品質——那麼提升你的前端技能對你在「那個組織內」的職涯回報遞減。搬到你的技能是瓶頸的地方。那裡你的進步直接轉化為商業價值,而商業價值轉化為職涯發展。

林宏志: (振奮地)這其實是整個討論中最務實的建議。我從一家硬體公司的軟體部門——前端是「必要的額外開銷」——搬到了一家前端「就是」產品的 SaaS 新創。同樣的我,同樣的技能,但突然我的 a11y 工作和效能優化從「沒人感謝的有也不錯」變成「直接影響營收的核心商業能力」。我的薪水漲了 45%,更重要的是,我的成長軌跡完全改變了,因為公司投資在我的發展上。

Kent Beck: 這裡有個跟開放 vs 封閉問題框架的美妙連結。在前端是成本中心的公司裡,大多數前端工作是封閉問題:「實作這個設計」。在前端「就是」產品的公司裡,更多工作本質上是開放問題:「搞清楚如何讓這個使用者體驗運作」。環境選擇了你做的工作類型,而工作類型選擇了你得到的成長類型。刻意地選擇你的環境。

Sarah Chen: 我補充一個評估公司的具體篩選條件。面試時問:「你能描述一個最近前端團隊影響了產品方向的決策嗎?」如果答案是「我們建構了 PM 指定的東西」,那就是成本中心型前端團隊。如果答案是「我們的前端效能審計發現頁面載入時間正在殺死轉換率,所以我們改變了產品路線圖來優先處理效能」,那就是利潤中心型前端團隊。加入第二種。

Dr. 陳明哲: 這裡有一個自我強化的動態。把前端視為利潤中心的公司吸引更好的前端工程師,這創造更強的工程文化,讓公司更成功,又吸引更好的工程師。這是飛輪效應。反過來也是真的——把前端視為成本中心的公司流失他們最好的前端工程師,這降低了產品品質,又驗證了「前端不重要」的偏見。選擇向上的螺旋,不要選向下的。

葉大師: 再一個戰術性的觀點。評估公司時,看他們的前端對後端比例。如果一家公司有 50 個後端工程師和 3 個前端工程師,前端顯然是成本中心。如果比例接近 1:1,甚至前端更多,公司重視前端作為核心競爭力。這是一個簡單的啟發法,預測力驚人地好。

張雅晴: 讓我給具體的台灣例子。前端「就是」產品或核心差異化因素的公司:

  • 91APP —— 他們的前端「就是」數千零售商的店面。前端品質直接影響商家營收。
  • Dcard —— 社交平台,UX 體驗「就是」產品。前端效能和互動品質決定使用者留存。
  • KKday —— 旅遊預訂平台,預訂流程「就是」營收引擎。轉換率直接綁定前端品質。
  • Appier —— AI 驅動的行銷平台,儀表板「就是」客戶與 AI 互動的方式。前端清晰度決定客戶成功。

前端是支援功能的公司(不是壞公司,但職涯動態不同):

  • 硬體公司的內部工具團隊
  • 金融機構的法規合規 UI
  • 政府合約軟體
  • 有固定客群的 B2B 企業軟體

這位 25 歲的工程師如果想最大化成長和影響力,應該瞄準第一類。

林宏志: (補充)如果你目前在第二類的公司,不要明天就辭職。把它當作「訓練場」。壓力較低、節奏較慢——用那個喘息空間練習 TDD、寫規格、建立作品集。然後在你準備好的時候移到第一類。較低壓力的環境其實是學習的優勢——只要你不待到停滯。

Kent Beck: 那是一個聰明的重新框架。每個環境都有學習機會——你只需要有意識地去提取它們。最差的職涯動作不是在錯的公司——是在任何公司卻沒有對你在學什麼的刻意性。

投票:第一優先行動是什麼?

專家 | 投票 | 理由
葉大師 | 成為你團隊的規格擁有者 | 最高槓桿:改變你的角色,而不只是技能
張雅晴 | 修正履歷 + 策略性瞄準下一個職位 | 職涯 ROI:對的環境加速其他一切
Kent Beck | 精通 TDD | 其他一切的基礎:品質、設計、AI 驗證
林宏志 | 建立解決困難問題的作品集 | 務實:無論環境如何都能展示價值
Sarah Chen | 選定垂直方向然後深入 | T 型深度創造無法偽造的護城河
Dr. 陳明哲 | 刻意練習除錯 | 建立排名第 1 的產業技能(89%「關鍵」)

Round 5: Taiwan-Specific Market Considerations

Moderator: We've been mostly industry-agnostic so far. But this engineer is in Taiwan, and the Taiwan market has unique dynamics. Ya-Ching, paint the picture.

Ya-Ching Chang: Let me give the honest, unfiltered view.

Salary reality: A mid-level frontend engineer in Taipei makes roughly NT$70,000-90,000/month (about US$2,200-2,800). A senior frontend engineer at a top-tier Taiwan tech company — TSMC's software division, Appier, Gogoro's tech team — tops out around NT$120,000-160,000/month (US$3,800-5,000). Compare that to a mid-level frontend engineer in the Bay Area making US$12,000-18,000/month. The gap is 3-5x.

Job market structure: Taiwan's tech industry is hardware-dominant. TSMC, MediaTek, ASE, and Delta employ massive engineering forces — but mostly hardware and embedded engineers. Software engineering, and frontend specifically, is concentrated in a smaller ecosystem: e-commerce (momo, PChome, Shopee Taiwan), fintech (LINE Bank, Cathay Financial), SaaS (91APP, Appier, KKday), and agencies. Total addressable market for frontend roles is maybe 2,000-3,000 positions across Taiwan.

AI adoption pace: Slower than Silicon Valley, faster than most of Asia. About 35% of Taiwan tech companies have formal AI tool policies. But the adoption is uneven — international-facing companies (Appier, Trend Micro, CoolBitX) are aggressive; traditional companies (many hardware firms' software divisions) are still debating whether to allow ChatGPT on company networks.

Hong-Zhi Lin: And there's a cultural factor nobody talks about. In many Taiwanese companies, especially the traditional ones, the engineering career ladder tops out at "team lead" or "technical manager." There's no Staff Engineer track, no Principal Engineer title. If you want to grow as an individual contributor, you either go abroad, join a foreign company's Taiwan office, or start freelancing. That ceiling is real, and it affects whether the "go deep" strategy is viable long-term.

William Yeh: Hong-Zhi is right about the ceiling, and I want to reframe it. The lack of an IC career ladder in Taiwan is a bottleneck — but it's also an opportunity. Because most Taiwanese engineers are funneled into management, there's a shortage of deep technical experts. If you become the definitive expert in performance optimization or accessibility in Taiwan's frontend community, you have almost no competition. You become the person companies hire as a consultant when their in-house team can't solve a problem.

I know three frontend engineers in Taiwan who earn NT$250,000+/month (US$8,000+) as independent consultants — more than any salaried position would pay them. They built their reputation through community contributions, conference talks, and a track record of solving hard problems. That's a viable path for this 25-year-old — but it takes 5-7 years of deliberate reputation building.

Sarah Chen: Let me add the "go abroad" perspective, since I took that path. I left Taiwan at 26, joined a US company's remote team first, then relocated to the Bay Area. Some observations:

The case for going abroad:

  • 3-5x salary multiplier — even remote positions at US companies pay 2-3x Taiwan salaries
  • Exposure to world-class engineering practices, code review culture, and career ladders
  • Network effects — one connection at a FAANG company opens doors you didn't know existed
  • You learn to operate in English, which permanently expands your opportunity set

The case for staying:

  • Cost of living in Taipei vs San Francisco is 3-4x different — the real purchasing power gap is smaller than the salary gap suggests
  • Taiwan's tech community is tight-knit — you can build deep relationships and reputation faster
  • Remote work has made the "abroad" option available without physically relocating
  • Quality of life: healthcare, safety, food, work-life balance — Taiwan is genuinely world-class

My honest recommendation: for this 25-year-old, don't relocate yet. Instead, target remote positions at international companies while living in Taiwan. You get the salary uplift (typically 1.5-2.5x Taiwan rates), the exposure to global practices, and the Taiwan quality of life. Relocate later if you want to — but you don't have to.

Kent Beck: (with interest) I want to pick up on something Sarah said — the remote work angle. In the post-AI world, remote becomes even more viable. If your value is in problem definition, verification, and governance — not in pairing in a physical office — geography matters less. The engineer who masters the four accountability questions William described can work for any company, from anywhere.

But — and this is important — remote work requires even stronger communication skills. You need to write clearly, present ideas asynchronously, build trust without in-person interaction, and manage your own time. These are soft skills that many engineers underinvest in. If this 25-year-old wants to go remote, they should start practicing written communication NOW. Write design documents. Write RFC proposals. Write detailed PR descriptions. Every piece of writing is practice for remote work communication.

Hong-Zhi Lin: (in a cautionary tone) I want to add a reality check on the remote path. I've tried it. Not every international company treats remote engineers in Asia equally. Some companies have a "core team" in the US office and "satellite" remote engineers who get less interesting work, fewer promotion opportunities, and are first to be cut in layoffs. Research the company's remote culture before jumping in. Ask: "What percentage of your engineering leadership works remotely?" If the answer is close to zero, that's your signal.

Ya-Ching Chang: Hong-Zhi is absolutely right. Let me add more Taiwan-specific hiring intelligence:

Companies with strong remote/Taiwan engineering culture (as of 2026):

  • Vercel — significant Taiwan-connected engineering presence
  • Automattic — fully distributed, strong frontend team
  • GitLab — all-remote since founding
  • Shopify — remote-first since 2020
  • Several crypto/Web3 companies with Asia-Pacific focus

Taiwan companies with emerging IC career ladders:

  • Appier — has introduced Staff Engineer track
  • 91APP — engineering-driven culture with technical growth paths
  • LINE Taiwan — mirrors LINE Japan's IC ladder
  • Trend Micro — established Principal Engineer path

What hiring managers look for in 2026 specifically:

  1. AI-assisted development experience (mentioned in 45% of senior frontend JDs now)
  2. TypeScript proficiency (non-negotiable — up from 60% to 92% of JDs in two years)
  3. Testing culture evidence (growing — 38% of JDs now mention testing expectations)
  4. Cross-functional collaboration evidence (the "soft skills" section is expanding in JDs)

Dr. Ming-Zhe Chen: I want to add the education and ecosystem perspective. Taiwan's computer science education is rigorous in fundamentals — algorithms, data structures, operating systems. But it's weak in software engineering practice — testing, CI/CD, code review, agile methodologies. This creates a gap that self-directed learning must fill.

The good news is Taiwan's developer community is surprisingly active for its size. Communities like React Taipei, Vue.js Taiwan, Frontend Developers Taiwan (on Facebook), and the annual JSDC conference provide learning and networking opportunities. I'd strongly encourage this 25-year-old to become an active community member — not just attending, but giving talks. The act of teaching forces you to systematize your knowledge, which accelerates the novice-to-expert progression.

We also have a growing number of "study groups" (讀書會) focused on specific topics — TDD, system design, AI tools. These are low-commitment, high-value learning environments. The 25-year-old should join at least one and commit for 6 months.

William Yeh: Let me add one more Taiwan-specific observation. The AI adoption curve in Taiwan creates a window of opportunity. Because many traditional companies are slow to adopt AI tools, engineers who master AI-assisted workflows NOW have a 12-18 month head start. By the time those companies catch up, this engineer can be the internal expert who helps the organization transition. That's a career-defining position — and it's available right now in Taiwan in a way it isn't in Silicon Valley, where everyone is already using AI tools.

Sarah Chen: That's a brilliant point. Being the "AI bridge" person in a company that's just starting to adopt AI tools is incredibly high-leverage. You become indispensable not because of your code, but because of your knowledge of how to integrate AI into engineering workflows. In Silicon Valley, that role is already filled. In Taiwan, it's wide open.

Ya-Ching Chang: Let me add a salary negotiation angle that's specific to Taiwan. Most engineers in Taiwan are terrible at negotiating because the culture discourages it. But here's the data: among my candidates who negotiated, the average salary increase over the initial offer was 12-18%. Among those who didn't negotiate, it was 0%. That's NT$10,000-15,000/month left on the table.

Specific negotiation advice for this 25-year-old:

  • Always ask for the salary band for the role. If they won't share it, that's a red flag about transparency.
  • Quantify your contributions in dollar terms. "I reduced page load time by 2 seconds, which our analytics showed increased conversion by 8%" is worth more than "I optimized performance."
  • If you have AI-assisted development experience, mention it explicitly. It's a differentiator right now in Taiwan because so few engineers can articulate it.
  • Negotiate for learning opportunities, not just salary: conference budget, training budget, allocated open-source time, or a rotation into a more challenging team.

Hong-Zhi Lin: Ya-Ching, that's great advice. Let me add the freelance/consulting angle, since William mentioned it. I've done some consulting on the side — helping companies with complex frontend migrations, performance audits, and accessibility compliance. The rates are surprising: NT$3,000-8,000/hour for specialized frontend consulting in Taiwan. That's 2-3x the equivalent hourly rate of a salaried position. But the key is specialization. Nobody pays premium rates for "I can build React apps." They pay premium rates for "I can audit your app's Core Web Vitals and guarantee a 40% improvement" or "I can make your app WCAG 2.1 AA compliant."

Kent Beck: (interested) The consulting angle ties back to the moat discussion beautifully. A consultant is, by definition, someone hired to solve open problems. Companies don't hire consultants for closed problems — they hire employees or AI for that. The consulting path is the ultimate expression of the open-problem skillset. Even if this 25-year-old doesn't go full-time consulting, doing occasional consulting work on the side is the best training ground for open-problem skills.

Dr. Ming-Zhe Chen: And there's an academic dimension to the Taiwan opportunity. Several universities — NTU, NTHU, NYCU — are now offering industry collaboration programs where companies can sponsor research on applied AI in software engineering. If this engineer wants to stay cutting-edge, participating in these collaborations (even informally, through connections with university labs) gives access to research insights before they become mainstream. Taiwan's small size means the degree of separation between industry and academia is remarkably short. Use that.

William Yeh: One final Taiwan-specific point. The government is actively pushing the "AI Taiwan" initiative with significant subsidies for AI adoption in traditional industries. Companies that apply for these subsidies need internal champions who understand both the technology and the business context. If our 25-year-old positions themselves as the person who can bridge AI tools and business needs within their company, they might find themselves aligned with a government-backed initiative, which means more budget, more visibility, and more organizational support for their growth.

Hong-Zhi Lin: (sarcastically) Government initiatives — that's optimistic, William. In my experience, those subsidies mostly go to managers who write impressive proposals and then struggle with execution. But I take the point: if you can be the technical person who makes the execution actually work, you're in a valuable position.

Sarah Chen: Let me add one more thing about the remote international path specifically. For anyone in Taiwan considering this: build your English writing skills. Not conversational English — technical writing in English. Write your PR descriptions in English. Write your design docs in English. Blog in English. The single biggest barrier for Taiwanese engineers entering the international job market isn't technical skill — it's the ability to communicate complex technical ideas clearly in written English. Invest in this, and the international market opens up dramatically.

Kent Beck: I want to underscore Sarah's point with a specific example. I've worked with engineers from dozens of countries. The ones from non-English-speaking countries who succeeded internationally all had one thing in common: they wrote clearly. Not perfectly — clearly. There's a difference. Perfect English is unnecessary; clear English is essential. Can you explain a technical decision in three paragraphs that a designer can understand? Can you write a bug report that contains the reproduction steps, expected behavior, and actual behavior — all in English? Can you write a Slack message that gets your point across without a 30-minute follow-up call? Those are the writing skills that matter.
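Kent's three-part bug report can even be treated as a structural checklist. A hypothetical TypeScript sketch (the `BugReport` interface and the sample report are invented for illustration, not an actual issue-tracker schema):

```typescript
// Kent's three required bug-report fields, as a structural checklist.
interface BugReport {
  title: string;
  reproductionSteps: string[];  // numbered, each independently executable
  expectedBehavior: string;
  actualBehavior: string;
}

const report: BugReport = {
  title: "Dropdown closes on first click in Safari 17",
  reproductionSteps: [
    "Open /settings in Safari 17",
    "Click the 'Language' dropdown once",
  ],
  expectedBehavior: "Dropdown stays open showing language options",
  actualBehavior: "Dropdown opens and immediately closes",
};
```

If the report type-checks, every field a reader needs is present; clarity of English fills in the rest.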

Hong-Zhi Lin: And let me add a practical tip for Taiwanese engineers specifically. Start an English-language technical blog. Write one post per month. It doesn't have to be brilliant — it has to be clear. Topics: debugging stories, technology comparisons, lessons learned from production incidents. Three benefits: you practice English writing, you build a public portfolio, and you force yourself to articulate your thinking — which is the exact skill that separates mid-level from senior.

I know engineers in Taiwan who got recruited by international companies directly through their blog posts. One colleague at my previous company wrote a detailed post about debugging a complex Next.js rendering issue. A Vercel recruiter found it, reached out, and he's now on their team earning 2.5x his previous salary — all because of one well-written technical blog post in English.

Ya-Ching Chang: That's a perfect example of what I mean by "make your growth visible." A blog post is permanent evidence of your expertise. It works 24/7 as a passive recruiter. And it compounds — each new post makes all your previous posts more discoverable. If the 25-year-old starts writing one English blog post per month today, in 12 months they'll have 12 pieces of public evidence of their expertise. That's more powerful than any certification.

Dr. Ming-Zhe Chen: One caveat on blogging: write about things you've actually done, not things you've merely read about. "I read about React Server Components" is weak. "I migrated our app from client-side rendering to RSC and here's what broke, why, and how I fixed it" is strong. The difference is earned knowledge versus borrowed knowledge. Hiring managers can tell the difference instantly.

Moderator: Let's vote on the Taiwan-specific question.

Vote: Stay in Taiwan or go abroad?

Expert | Vote | Reasoning
William Yeh | Stay + become the AI bridge | Taiwan's slower AI adoption creates a unique window of opportunity
Ya-Ching Chang | Stay + target remote international roles | Best of both worlds: global salary, Taiwan quality of life
Kent Beck | Geography is irrelevant if you master communication | Remote work makes location secondary to skill
Hong-Zhi Lin | Stay for now, but prepare the option to leave | Build skills and network that work in any market
Sarah Chen | Stay + remote international, relocate later if desired | Don't rush to relocate — the remote option is better than it was 5 years ago
Dr. Ming-Zhe Chen | Stay + invest heavily in Taiwan's developer community | Community leadership builds reputation faster in a smaller market

第五回合:台灣市場的特殊考量

主持人: 到目前為止我們大多是產業通用的。但這位工程師在台灣,而台灣市場有獨特的動態。雅晴,描繪一下全景。

張雅晴: 讓我給一個誠實、未經過濾的觀點。

薪資現實: 台北的中階前端工程師月薪大約 NT$70,000-90,000(約 US$2,200-2,800)。在頂尖台灣科技公司——台積電軟體部門、Appier、Gogoro 技術團隊——的資深前端工程師,上限約 NT$120,000-160,000/月(US$3,800-5,000)。相比之下,灣區的中階前端工程師月入 US$12,000-18,000。差距是 3-5 倍。

就業市場結構: 台灣科技業以硬體為主。台積電、聯發科、日月光、台達電僱用大量工程人力——但主要是硬體和嵌入式工程師。軟體工程,特別是前端,集中在一個較小的生態系:電商(momo、PChome、蝦皮台灣)、金融科技(LINE Bank、國泰金控)、SaaS(91APP、Appier、KKday)和代理商。全台灣前端職缺的總目標市場大概只有 2,000-3,000 個。

AI 採用速度: 比矽谷慢,比亞洲大部分地區快。約 35% 的台灣科技公司有正式的 AI 工具政策。但採用不均衡——面向國際的公司(Appier、趨勢科技、CoolBitX)很積極;傳統公司(許多硬體公司的軟體部門)還在辯論是否允許在公司網路上使用 ChatGPT。

林宏志: 還有一個沒人談的文化因素。在很多台灣公司裡,尤其是傳統公司,工程師職涯階梯的頂端是「team lead」或「技術經理」。沒有 Staff Engineer 軌道,沒有 Principal Engineer 職稱。如果你想作為個人貢獻者成長,你要嘛出國,要嘛加入外商在台辦公室,要嘛開始接案。這個天花板是真的,而且它影響「走深」策略是否長期可行。

葉大師: 宏志說的天花板是對的,我想重新框架它。台灣缺乏 IC 職涯階梯是一個「瓶頸」——但也是一個「機會」。因為大多數台灣工程師被導向管理職,深度技術專家嚴重短缺。如果你成為台灣前端社群中效能優化或無障礙的權威專家,你幾乎沒有競爭者。你會成為那個當公司內部團隊解決不了問題時,被請來當顧問的人。

我認識三位台灣的前端工程師,作為獨立顧問月入 NT$250,000+(US$8,000+)——比任何受薪職位都高。他們透過社群貢獻、研討會演講和解決困難問題的紀錄建立了聲譽。這對 25 歲的工程師是可行的路徑——但需要 5-7 年的刻意聲譽建設。

Sarah Chen: 讓我補充「出國」的觀點,因為我走過那條路。我 26 歲離開台灣,先加入美國公司的遠端團隊,然後搬到灣區。一些觀察:

出國的理由:

  • 3-5 倍薪資乘數——即使是美國公司的遠端職位也付台灣薪資的 2-3 倍
  • 接觸世界級的工程實踐、code review 文化和職涯階梯
  • 人脈效應——在 FAANG 公司的一個連結能打開你不知道存在的門
  • 你學會用英文工作,這永久地擴展了你的機會集

留下的理由:

  • 台北 vs 舊金山的生活成本差 3-4 倍——「真正的」購買力差距比薪資差距小
  • 台灣的技術社群緊密——你能更快建立深度關係和聲譽
  • 遠端工作讓「海外」選項不需要實體搬遷就能實現
  • 生活品質:健保、安全、美食、工作生活平衡——台灣確實是世界級的

我的誠實建議:對這位 25 歲的工程師,先不要搬家。相反,在住台灣的同時瞄準國際公司的遠端職位。你能得到薪資提升(通常是台灣行情的 1.5-2.5 倍)、接觸全球實踐的機會,以及台灣的生活品質。如果你想,之後再搬——但你不必。

Kent Beck: (感興趣地)我想接 Sarah 說的——遠端工作的角度。在後 AI 時代,遠端變得更加可行。如果你的價值在問題定義、驗證和治理——不是在實體辦公室配對程式設計——地理就不那麼重要。精通葉大師描述的四個問責問題的工程師,可以從任何地方為任何公司工作。

但——這很重要——遠端工作需要更強的溝通技能。你需要寫得清楚、非同步地呈現想法、在沒有面對面互動的情況下建立信任,以及管理自己的時間。這些是很多工程師投資不足的軟技能。如果這位 25 歲的工程師想走遠端,他們應該「現在」就開始練習書面溝通。寫設計文件。寫 RFC 提案。寫詳細的 PR 描述。每一篇寫作都是遠端工作溝通的練習。

林宏志: (警告的語氣)我想對遠端路徑加個現實檢核。我試過。不是每家國際公司都平等對待亞洲的遠端工程師。有些公司有美國辦公室的「核心團隊」和「衛星」遠端工程師,後者得到較無趣的工作、較少的升遷機會,而且裁員時最先被砍。跳進去之前研究公司的遠端文化。問:「你們的工程領導層有多大比例是遠端工作的?」如果答案接近零,那就是你的信號。

張雅晴: 宏志完全正確。讓我補充更多台灣特定的招聘情報:

具有良好遠端/台灣工程文化的公司(截至 2026 年):

  • Vercel —— 有顯著的台灣關聯工程人員
  • Automattic —— 完全分散式,強大的前端團隊
  • GitLab —— 創立以來就全遠端
  • Shopify —— 2020 年起遠端優先
  • 數家聚焦亞太的加密/Web3 公司

具有新興 IC 職涯階梯的台灣公司:

  • Appier —— 已引入 Staff Engineer 軌道
  • 91APP —— 工程驅動文化,有技術成長路徑
  • LINE 台灣 —— 對齊 LINE 日本的 IC 階梯
  • 趨勢科技 —— 已建立 Principal Engineer 路徑

2026 年招聘主管特別在找的:

  1. AI 輔助開發經驗(現在 45% 的資深前端 JD 中提到)
  2. TypeScript 熟練度(不可協商——兩年內從 60% 上升到 92% 的 JD)
  3. 測試文化證據(增長中——38% 的 JD 現在提到測試期望)
  4. 跨職能協作證據(JD 中的「軟技能」區塊在擴大)

Dr. 陳明哲: 我想加入教育和生態系的觀點。台灣的資訊科學教育在基礎方面很紮實——演算法、資料結構、作業系統。但在軟體工程實踐方面薄弱——測試、CI/CD、code review、敏捷方法論。這創造了一個自主學習必須填補的差距。

好消息是台灣的開發者社群相對於其規模來說出奇活躍。像 React Taipei、Vue.js Taiwan、前端社群 Frontend Developers Taiwan(Facebook 上)和年度 JSDC 大會提供了學習和社交的機會。我強烈建議這位 25 歲的工程師成為活躍的社群成員——不只是參加,而是給演講。教學的行為迫使你系統化你的知識,這加速了從新手到專家的進程。

我們也有越來越多的「讀書會」專注於特定主題——TDD、系統設計、AI 工具。這些是低承諾、高價值的學習環境。這位 25 歲的工程師應該至少加入一個,承諾投入 6 個月。

葉大師: 讓我再加一個台灣特定的觀察。台灣的 AI 採用曲線創造了一個機會窗口。因為很多傳統公司在 AI 工具採用上較慢,「現在」精通 AI 輔助工作流程的工程師有 12-18 個月的先行優勢。等到那些公司跟上時,這位工程師可以成為幫助組織轉型的內部專家。那是一個定義職涯的位置——而且它現在在台灣是可得的,在矽谷則不然,因為那裡每個人都已經在用 AI 工具了。

Sarah Chen: 那是一個絕妙的觀點。在一家剛開始採用 AI 工具的公司裡成為「AI 橋樑」角色,是難以置信的高槓桿。你變得不可或缺,不是因為你的程式碼,而是因為你知道如何把 AI 整合進工程工作流程。在矽谷,那個角色已經被填滿了。在台灣,它是大開的。

張雅晴: 讓我補充一個台灣特有的薪資談判角度。台灣大多數工程師不擅長談判,因為文化不鼓勵。但數據是這樣的:在我的候選人中,有談判的人,平均薪資比最初 offer 高了 12-18%。沒有談判的,是 0%。那是每月 NT$10,000-15,000 被留在桌上。

給這位 25 歲工程師的具體談判建議:

  • 永遠詢問該職位的薪資級距。如果他們不願分享,那是透明度的紅旗。
  • 用金額量化你的貢獻。「我將頁面載入時間減少了 2 秒,我們的分析顯示這提高了 8% 的轉換率」比「我優化了效能」更有價值。
  • 如果你有 AI 輔助開發經驗,明確提出。這在台灣目前是差異化因素,因為很少工程師能清楚表達它。
  • 談判學習機會,不只是薪資:研討會預算、培訓預算、開源時間配額,或輪調到更有挑戰性的團隊。

林宏志: 雅晴,那是很好的建議。讓我補充自由接案/顧問的角度,因為葉大師提到了。我做過一些兼職顧問——幫公司處理複雜的前端遷移、效能審計和無障礙合規。費率令人驚訝:台灣的專業前端顧問時薪 NT$3,000-8,000。那是受薪職位等效時薪的 2-3 倍。但關鍵是專業化。沒人會為「我能建 React 應用」付高價。他們為「我能審計你的應用 Core Web Vitals 並保證 40% 的改善」或「我能讓你的應用符合 WCAG 2.1 AA」付高價。

Kent Beck: (感興趣)顧問角度與護城河討論完美呼應。顧問,按定義,就是被僱來解決開放問題的人。公司不會為了封閉問題僱用顧問——他們為此僱用員工或 AI。顧問路徑是開放問題技能集的終極表達。即使這位 25 歲的工程師不全職做顧問,偶爾做一些兼職顧問工作是開放問題技能的最佳訓練場。

Dr. 陳明哲: 台灣機會還有一個學術面向。幾所大學——台大、清大、陽明交大——現在提供產業合作計畫,公司可以贊助 AI 在軟體工程中的應用研究。如果這位工程師想保持在最前沿,參與這些合作(即使是非正式的,透過與大學實驗室的連結)能在研究見解成為主流之前就取得。台灣的小規模意味著產業和學術之間的分隔度出奇地短。善用它。

葉大師: 最後一個台灣特定的觀點。政府正在積極推動「AI 台灣」倡議,對傳統產業的 AI 採用提供大量補助。申請這些補助的公司需要內部的推動者,同時理解技術和商業脈絡。如果我們的 25 歲工程師把自己定位為能在公司內橋接 AI 工具和商業需求的人,他可能會發現自己與一個政府支持的倡議對齊,這意味著更多預算、更多能見度、更多組織對他成長的支持。

林宏志: (諷刺地)政府倡議——那很樂觀,葉大師。以我的經驗,那些補助大多流向寫出印象深刻的提案但執行上掙扎的主管。但我接受這個觀點:如果你能成為讓執行真正成功的技術人員,你就在一個有價值的位置。

Sarah Chen: 讓我對遠端國際路徑再補充一點。對台灣任何考慮這條路的人:建立你的英文寫作能力。不是會話英文——是英文的「技術寫作」。用英文寫你的 PR 描述。用英文寫你的設計文件。用英文寫部落格。台灣工程師進入國際就業市場的最大障礙不是技術能力——是用書面英文清楚溝通複雜技術想法的能力。投資在這上面,國際市場就會戲劇性地打開。

Kent Beck: 我想用一個具體的例子來強調 Sarah 的觀點。我跟來自數十個國家的工程師共事過。來自非英語國家卻在國際上成功的人,都有一個共同點:他們寫得清楚。不是完美——是清楚。有差別的。完美的英文不必要;清楚的英文是必備的。你能用三段話解釋一個技術決策,讓設計師也能理解嗎?你能寫一份 bug 報告,包含重現步驟、預期行為和實際行為——全部用英文嗎?你能寫一條 Slack 訊息,傳達你的觀點而不需要 30 分鐘的後續通話嗎?那些才是重要的寫作技能。

林宏志: 讓我給台灣工程師特別加一個實用建議。開始一個英文技術部落格。每月寫一篇文章。不用寫得很厲害——要寫得清楚。題目:除錯故事、技術比較、從生產環境事故中學到的教訓。三個好處:你練習英文寫作、建立公開的作品集,而且你強迫自己表達你的思考——那正是區分中階和資深的技能。

我認識台灣的工程師直接透過他們的部落格文章被國際公司挖角。我前一家公司的一個同事寫了一篇詳細的文章,關於 debug 一個複雜的 Next.js 渲染問題。一個 Vercel 的 recruiter 發現了它,主動聯繫,他現在在他們的團隊裡賺他之前薪水的 2.5 倍——全都因為一篇寫得很好的英文技術部落格文章。

張雅晴: 那是「讓你的成長可見」的完美例子。一篇部落格文章是你專業的永久證據。它 24/7 作為被動 recruiter 工作。而且它會複利——每一篇新文章讓你之前的所有文章更容易被發現。如果這位 25 歲的工程師今天開始每月寫一篇英文部落格文章,12 個月後他們會有 12 篇公開的專業證據。那比任何證照都更有力。

Dr. 陳明哲: 關於寫部落格有一個注意事項:寫你實際做過的事,不是你只是讀過的東西。「我讀了關於 React Server Components 的文章」很弱。「我把我們的應用從 client-side rendering 遷移到 RSC,這裡是什麼壞了、為什麼、以及我怎麼修的」很強。差別在於掙來的知識 vs 借來的知識。招聘主管能瞬間看出差別。

主持人: 來投票台灣特定的問題。

投票:留在台灣 vs 出海發展?

專家 | 投票 | 理由
葉大師 | 留下 + 成為 AI 橋樑 | 台灣較慢的 AI 採用創造了獨特的機會窗口
張雅晴 | 留下 + 瞄準遠端國際職位 | 兩全其美:全球薪資,台灣生活品質
Kent Beck | 如果精通溝通,地理無關 | 遠端工作讓位置次於技能
林宏志 | 先留下,但準備離開的選項 | 建立在任何市場都管用的技能和人脈
Sarah Chen | 留下 + 遠端國際,之後有需要再搬 | 不要急著搬家——遠端選項比 5 年前好多了
Dr. 陳明哲 | 留下 + 大力投資台灣開發者社群 | 社群領導力在較小的市場中更快建立聲譽

Bonus Round: Rapid-Fire Questions from the Audience

Moderator: Before we close, we have some rapid-fire questions from the audience. Quick answers only — 2-3 sentences max per expert.

Audience Q1: "Should I learn a second programming language, or go deeper in JavaScript/TypeScript?"

Hong-Zhi Lin: Go deeper in JS/TS first. Most frontend engineers only scratch the surface — they use async/await without understanding Promises, generators, or the microtask queue. That depth matters more than breadth right now.
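As one concrete instance of the depth Hong-Zhi means: generators are a core JavaScript feature most engineers only touch indirectly through libraries. A minimal TypeScript sketch of lazy evaluation with an infinite generator (names are illustrative):

```typescript
// A generator runs lazily: no value is computed until a consumer asks for it.
function* fibonacci(): Generator<number> {
  let a = 0, b = 1;
  while (true) {
    yield a;
    [a, b] = [b, a + b];
  }
}

// Pull just n values from an otherwise infinite sequence.
function take<T>(gen: Generator<T>, n: number): T[] {
  const out: T[] = [];
  for (const value of gen) {
    if (out.length === n) break;
    out.push(value);
  }
  return out;
}

const firstSix = take(fibonacci(), 6);  // [0, 1, 1, 2, 3, 5]
```

Understanding why this doesn't loop forever (the `for...of` drives the generator one `yield` at a time) is exactly the kind of depth that separates using a feature from knowing the language.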

Sarah Chen: Disagree slightly. Learning Rust or Go, even superficially, gives you a systems-level mental model that makes you a better frontend engineer. You start understanding performance at a lower level. But this is an investment for months 7-12, not month 1.

William Yeh: Learn whatever language lets you write specs and tests more effectively. If that's TypeScript with strict mode, great. If it's learning SQL to better understand your data contracts, even better. The language matters less than what you do with it.
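To make William's "TypeScript with strict mode" concrete: under `"strict": true` (which enables `strictNullChecks`), the compiler refuses code that ignores the null case, so part of the spec is enforced by the types. A minimal sketch (the function and names are illustrative):

```typescript
// Under strict mode, `name` cannot be used as a string until the null case
// is handled: the compiler requires the early-return branch below.
function formatUsername(name: string | null): string {
  if (name === null) return "anonymous";
  return name.trim().toUpperCase();
}

const a = formatUsername("  alice ");  // "ALICE"
const b = formatUsername(null);        // "anonymous"
```

The type annotation `string | null` is a one-line behavioral spec: it tells every future reader (and the compiler) that the null case exists and must be handled.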

Audience Q2: "Is it worth getting AWS/GCP certifications?"

Ya-Ching Chang: (bluntly) For a frontend engineer in Taiwan? No. Certifications impress HR screeners at large companies, but engineering managers care about demonstrated skill, not badges. Spend that time contributing to open source or building a portfolio project instead.

Kent Beck: Certifications test your ability to memorize — which is the exact capability that AI makes worthless. Invest in unteachable skills: judgment, taste, communication.

Dr. Ming-Zhe Chen: One exception: if you want to move into a DevOps or platform engineering role, a Kubernetes certification (CKA) has genuine industry signal. But for frontend specifically, I agree with Ya-Ching.

Audience Q3: "I'm thinking about getting a master's degree in CS. Good idea?"

Dr. Ming-Zhe Chen: It depends on your goal. If you want to do AI/ML research or enter a company that values advanced degrees (Google, academia), yes. If you want to be a better frontend engineer, no — the ROI of 2 years of industry experience plus deliberate self-study is higher than a master's degree. Biased answer from a professor, I know, but I'm being honest.

Sarah Chen: At Meta, a master's degree added about 5-10% to starting salary but had no measurable impact on promotion velocity after year 2. Experience and demonstrated impact mattered far more.

Hong-Zhi Lin: Save the tuition money. Use it to attend international conferences, buy courses, and fund 6 months of reduced work hours for intensive self-study. That's a better investment.

Audience Q4: "What's the single best resource for a frontend engineer who wants to level up?"

Kent Beck: My honest answer: read other people's code. Not tutorials. Not courses. Pick a library you use — React, Next.js, a state management library — and read the source code. The best engineers I've ever worked with were voracious code readers.

Hong-Zhi Lin: The Chrome DevTools documentation. Seriously. Most frontend engineers use 10% of what DevTools can do. Learning the Performance panel, Memory panel, and Lighthouse in depth will 10x your debugging capability.

William Yeh: The Phoenix Project by Gene Kim. It's a novel about operations and bottleneck management, but the TOC principles apply directly to how you think about your own career bottlenecks.

Ya-Ching Chang: LinkedIn. Not for scrolling — for researching. Find 10 senior frontend engineers at companies you admire. Study their career trajectory. What did they do at year 2? Year 5? Reverse-engineer their path.

Sarah Chen: The Web.dev and MDN documentation. These are written by browser engineers and contain the deepest, most accurate information about how the web platform works. If you've only used MDN as an API reference, you're missing 80% of its value.

Dr. Ming-Zhe Chen: Academic papers. I know it sounds intimidating, but start with ICSE, ESEC/FSE, and CHI papers on developer productivity and software engineering. These give you frameworks for thinking about your work that no blog post or course provides. Use Semantic Scholar to find accessible survey papers.

Audience Q5: "Should I build side projects or contribute to open source?"

Kent Beck: Both, but open source first. Side projects prove you can build things. Open source proves you can collaborate, handle feedback, and work in complex existing codebases — which is what real engineering work looks like.

Hong-Zhi Lin: Side projects if you want to explore a technology. Open source if you want to grow as an engineer. They develop different muscles. At 25, I'd prioritize open source.

Ya-Ching Chang: From a hiring standpoint, one meaningful open source contribution (a merged PR with substantive discussion) is worth ten side projects that nobody uses. The contribution shows collaboration; the side project shows initiative. You need both, but the contribution is rarer and more impressive.

Audience Q6: "How do I deal with imposter syndrome when comparing myself to AI output?"

Sarah Chen: (warmly) This is the most important question tonight. Imposter syndrome was already epidemic among engineers — AI has made it worse because now you're comparing yourself to a system that has read all of GitHub. Stop comparing. AI is a tool, like a calculator. You don't feel inferior to a calculator because it multiplies faster than you. Focus on what calculators can't do: decide what to calculate, verify the result makes sense, and explain why it matters.

Dr. Ming-Zhe Chen: Empirically: AI doesn't "understand" anything. It pattern-matches at superhuman scale. Your understanding — messy, incomplete, and hard-won — is genuine in a way that AI output is not. When you debug a problem and finally understand the root cause, that understanding changes your mental model permanently. AI has no mental model. It has statistical correlations. Don't confuse its speed with your inadequacy.

William Yeh: The best antidote to imposter syndrome is to solve a genuinely hard problem and document it. When you look back at your engineering journal and see five problems that nobody — not AI, not your colleagues — could solve except you, the imposter feeling evaporates. It's evidence-based confidence.

Kent Beck: (gently) Everyone feels like an imposter sometimes. I still do, and I've been doing this for 40 years. The trick is: imposter syndrome means you're in a growth zone. If you never feel it, you're not being challenged enough. Reframe it from "I'm not good enough" to "I'm exactly where growth happens."

加映回合:觀眾快問快答

主持人: 結束前,我們有一些來自觀眾的快問快答。每位專家快速回答——最多 2-3 句。

觀眾提問 1:「我該學第二個程式語言,還是在 JavaScript/TypeScript 上更深入?」

林宏志: 先深入 JS/TS。大多數前端工程師只碰到表面——他們用 async/await 但不理解 Promise、generator 或 microtask 佇列。那個深度現在比廣度更重要。

Sarah Chen: 稍微不同意。學 Rust 或 Go,即使是表面性的,會給你一個系統層級的心智模型,讓你成為更好的前端工程師。你開始在更底層理解效能。但這是第 7-12 個月的投資,不是第 1 個月。

葉大師: 學任何能讓你「更有效地寫規格和測試」的語言。如果是 strict mode 的 TypeScript,很好。如果是學 SQL 來更好地理解你的資料契約,更好。語言本身不重要,重要的是你用它做什麼。

觀眾提問 2:「考 AWS/GCP 證照值得嗎?」

張雅晴: (直白地)對台灣的前端工程師?不值得。證照能打動大公司的 HR 篩選者,但工程主管在意的是展示出的技能,不是徽章。把那個時間花在貢獻開源或建立作品集專案上。

Kent Beck: 證照測試你的記憶能力——那正是 AI 讓它變得毫無價值的能力。投資在不可教的技能上:判斷力、品味、溝通。

Dr. 陳明哲: 一個例外:如果你想轉向 DevOps 或平台工程角色,Kubernetes 證照(CKA)有真正的產業信號。但對前端來說,我同意雅晴。

觀眾提問 3:「我在考慮念資工碩士。好主意嗎?」

Dr. 陳明哲: 取決於你的目標。如果你想做 AI/ML 研究或進入重視學歷的公司(Google、學術界),是的。如果你想成為更好的前端工程師,不——2 年業界經驗加上刻意自學的 ROI 比碩士學位更高。來自教授的偏見回答,我知道,但我在誠實。

Sarah Chen: 在 Meta,碩士學位讓起薪增加約 5-10%,但在第 2 年之後對升遷速度沒有可測量的影響。經驗和展示的影響力重要得多。

林宏志: 省下學費。用它來參加國際研討會、買課程,以及資助 6 個月減少工時的密集自學。那是更好的投資。

觀眾提問 4:「想要升級的前端工程師,最好的單一資源是什麼?」

Kent Beck: 我的誠實回答:讀「別人的程式碼」。不是教程。不是課程。選一個你在用的函式庫——React、Next.js、一個狀態管理函式庫——然後讀原始碼。我共事過最優秀的工程師,都是貪婪的程式碼閱讀者。

林宏志: Chrome DevTools 文件。認真的。大多數前端工程師只用了 DevTools 10% 的功能。深入學習 Performance 面板、Memory 面板和 Lighthouse 會讓你的除錯能力提升 10 倍。

葉大師: Gene Kim 的 The Phoenix Project。它是一本關於維運和瓶頸管理的小說,但 TOC 原則直接適用於你如何思考自己的職涯瓶頸。

張雅晴: LinkedIn。不是為了滑——是為了研究。找 10 位你欽佩的公司的資深前端工程師。研究他們的職涯軌跡。他們在第 2 年做了什麼?第 5 年?逆向工程他們的路徑。

Sarah Chen: Web.dev 和 MDN 文件。這些是由瀏覽器工程師撰寫的,包含關於 web 平台如何運作的最深入、最準確的資訊。如果你只把 MDN 當 API 參考用,你錯過了它 80% 的價值。

Dr. 陳明哲: 學術論文。我知道聽起來很嚇人,但從 ICSE、ESEC/FSE 和 CHI 關於開發者生產力和軟體工程的論文開始。這些給你思考工作的框架,是任何部落格文章或課程都提供不了的。用 Semantic Scholar 找好入門的 survey 論文。

觀眾提問 5:「我該做 side project 還是貢獻開源?」

Kent Beck: 兩個都做,但開源優先。Side project 證明你能建造東西。開源證明你能協作、處理回饋、在複雜的既有程式碼庫中工作——那才是真正工程工作的樣子。

林宏志: 想探索技術就做 side project。想成長為工程師就做開源。它們鍛鍊不同的肌肉。在 25 歲,我會優先開源。

張雅晴: 從招聘角度,一個有意義的開源貢獻(一個有實質討論的合併 PR)勝過十個沒人用的 side project。貢獻展示協作力;side project 展示主動性。你兩個都需要,但貢獻更稀有也更令人印象深刻。

觀眾提問 6:「當我拿自己跟 AI 輸出比較時,怎麼處理冒牌者症候群?」

Sarah Chen: (溫暖地)這是今晚最重要的問題。冒牌者症候群在工程師中已經是流行病了——AI 讓它更嚴重了,因為現在你在跟一個讀過整個 GitHub 的系統比較。停止比較。AI 是工具,像計算機。你不會因為計算機乘法比你快就覺得自卑。專注在計算機做不到的事上:決定要計算什麼、驗證結果是否合理、解釋為什麼它重要。

Dr. 陳明哲: 實證上:AI 不「理解」任何東西。它在超人的規模上做模式匹配。你的理解——混亂的、不完整的、辛苦得來的——以一種 AI 輸出不具備的方式是真實的。當你 debug 一個問題終於理解了根因,那個理解永久地改變了你的心智模型。AI 沒有心智模型。它有統計相關性。不要把它的速度跟你的不足混淆。

葉大師: 冒牌者症候群最好的解藥是解決一個真正困難的問題然後記錄下來。當你回顧你的工程日誌,看到五個沒有人——不是 AI、不是你的同事——除了你能解決的問題,冒牌者的感覺就會消散。那是以證據為基礎的信心。

Kent Beck: (溫和地)每個人有時候都覺得自己是冒牌者。我到現在還會,而且我已經做了 40 年。竅門是:冒牌者症候群意味著你在成長區。如果你從未感受到它,你就沒有被充分挑戰。把它從「我不夠好」重新框架為「我正處在成長發生的地方」。


Closing: Final Advice and Comprehensive Vote

Moderator: We've covered a lot of ground — threat assessment, moat skills, the mid-level trap, concrete action plans, and Taiwan-specific dynamics. Let's close with each expert giving their one-sentence final advice to this 25-year-old. Then we'll do a final comprehensive vote.

William Yeh: Stop identifying as "a person who writes code" and start identifying as "the person who ensures the right code gets written, verified, and governed" — that identity shift is worth more than any technical skill you'll learn this year.

Ya-Ching Chang: Rewrite your resume this weekend, join one community by next week, and apply to one company with an engineering-driven culture by next month — the best strategy in the world is worthless without execution, and execution starts with the smallest possible step today.

Kent Beck: Write your tests before your code, ask "why" before you implement, and ship something imperfect every week — the discipline of defining correctness, questioning assumptions, and iterating fast will serve you for the next 40 years of your career, regardless of what AI can do.

Hong-Zhi Lin: Keep your hands dirty, your curiosity sharp, and your engineering journal updated — the engineers who survive every technological wave are the ones who never stop building real things and never stop asking how they actually work under the hood.

Sarah Chen: Build one thing deep enough that people come to you for answers, build one relationship strong enough that someone will vouch for you when opportunities arise, and build one workflow efficient enough that AI makes you 10x better instead of replaceable — depth, relationships, and leverage are your three pillars.

Dr. Ming-Zhe Chen: Embrace the struggle — every bug you debug yourself instead of asking AI, every spec you write from scratch instead of generating, every design decision you agonize over instead of outsourcing — those struggles are building the neural pathways of expertise that no AI can shortcut and no market shift can devalue.

Moderator: Powerful advice. But before we vote, let me provoke one last exchange. William, you and Hong-Zhi have been sparring all night. Can you each acknowledge one thing the other said that changed your thinking?

William Yeh: (pauses) Fair question. Hong-Zhi's point about the a11y complexity — 47 screen reader combinations — that was a wake-up call. I've been focused on the macro trend of bottleneck migration, but I was underestimating the sheer technical complexity of frontend-specific edge cases. My revised position: the bottleneck HAS migrated for routine frontend work, but for specialized frontend domains like accessibility, performance, and complex interaction patterns, deep technical skill remains a bottleneck in its own right. It's not either/or — it's a spectrum. Where you sit on the spectrum determines your strategy.

Hong-Zhi Lin: (nods) And William's four accountability questions genuinely reshaped how I think about my own career. I've been focused on being the best problem solver in the room, but I haven't been deliberate about owning the definition of what problems to solve. Starting next week, I'm going to start writing behavioral specs for every feature I work on — even if nobody asked me to. Because William is right: the engineer who defines the spec has more leverage than the engineer who implements it.

Sarah Chen: (smiles) This is the most valuable thing that happened tonight. Not the frameworks, not the data — the willingness to update your position based on evidence. That's what genuine seniority looks like.

Kent Beck: And that's what open-problem solving looks like. Closed-problem thinkers defend their position. Open-problem thinkers update it. If the 25-year-old takes nothing else from tonight, take this: be the person who changes their mind when the evidence warrants it. That's not weakness — that's the most valuable engineering skill there is.

Dr. Ming-Zhe Chen: I'll share one data point that should give the 25-year-old hope. In our longitudinal study of 300 engineers over 5 years, the single strongest predictor of career success — defined as reaching senior/staff level, compensation growth, and job satisfaction — was not IQ, not educational background, not starting company prestige. It was growth mindset, as measured by their willingness to seek feedback, change approaches when evidence contradicts their assumptions, and invest in learning outside their comfort zone. That correlates with career success at r=0.67, which is remarkably high for a behavioral variable.

Ya-Ching Chang: And from the hiring side, I'll confirm that. The engineers I've placed who succeeded most dramatically in their new roles weren't the most technically brilliant. They were the most adaptable — the ones who asked the most questions in their first month, who sought feedback proactively, and who weren't afraid to say "I don't know, but I'll figure it out." Teachability beats talent every time, especially in a landscape that's changing as fast as ours.

Moderator: Now, the final comprehensive vote.

Final Vote: What is the overall best strategy for this 25-year-old?

Expert | Strategy Summary | Key Emphasis
William Yeh | Become a spec owner and governance expert; use TOC to identify where to add value | Identity shift from coder to governor
Ya-Ching Chang | Optimize career positioning: right resume, right company, right role | Execution and market awareness
Kent Beck | Master fundamentals (TDD, design, testing); let principles outlast tools | Timeless discipline over trendy skills
Hong-Zhi Lin | Go deep on browser fundamentals; build a track record of solving hard problems | Technical depth as the ultimate insurance
Sarah Chen | T-shaped development: one deep vertical + broad AI literacy + international exposure | Balanced growth with global optionality
Dr. Ming-Zhe Chen | Deliberate practice through struggle; build debugging intuition; contribute to community | Evidence-based skill development

Consensus Points (where all 6 experts agree):

  1. The threat is real but the timeline is debatable — AI will reshape frontend engineering, but complete replacement of mid-level engineers is not imminent
  2. Coding speed is no longer a differentiator — the market increasingly values everything around the code: specs, testing, verification, communication
  3. The mid-level trap is dangerous — having a Senior title without senior-level problem-solving skills is a precarious position, especially as AI raises the bar
  4. Active community participation accelerates growth — Taiwan's tight-knit developer community is an underutilized asset
  5. AI literacy is non-negotiable — not using AI tools in 2026 is like not using Git in 2012
  6. The 25-year-old has a window of 2-5 years — not infinite, but enough to reposition with deliberate effort

Disagreement Points (where the panel remains split):

  1. Timeline urgency — William and Ya-Ching say 2-3 years; Hong-Zhi and Dr. Chen say 4-5+ years
  2. Fundamentals vs tools first — Kent says always fundamentals; Sarah says interleave from day one
  3. Stay vs go — consensus is "stay for now" but the urgency of preparing international options varies
  4. Technical depth vs breadth — Hong-Zhi advocates deep technical skills; William advocates governance skills; Sarah says both via T-shape

Scenario Analysis: Three Possible Futures

Scenario A: AI acceleration continues (probability: 40%)

LLM capabilities improve by another 50%+ in 18 months. AI can handle cross-component coordination, basic design decisions, and even some testing. Impact on this 25-year-old: urgent need to move to Layer 3 (problem definition and governance). Companies cut mid-level frontend roles by 30-40%. The engineer who has invested in spec writing, verification, and AI orchestration thrives. The engineer who only invested in deeper React knowledge struggles, because the AI can now do most of what they do.

Scenario B: AI plateau (probability: 35%)

Progress slows due to data scaling limits, regulatory constraints, or fundamental architecture limitations. AI tools improve incrementally but don't reach the "cross-component coordination" threshold. Impact: more breathing room. Both deep technical skills and governance skills remain valuable. The Taiwan market stabilizes. The 25-year-old has 5+ years to reposition. The T-shaped strategy is optimal here — go deep in one area while building broad AI literacy.

Scenario C: Wild card — AI breakthrough in reasoning (probability: 25%)

A fundamental architecture shift (beyond transformers) enables AI to genuinely reason about complex systems, including frontend state management, accessibility implications, and cross-browser behavior. Impact: transformative disruption. Only the highest-level skills remain in the human domain: business strategy, user research, ethical governance. The 25-year-old's best hedge: move as far upstream as possible — become the person who defines what to build, not how to build it.

William Yeh: Notice that in all three scenarios, governance and spec definition skills are valuable. That's why I'm advocating for them — they're the robust strategy across multiple futures.

Hong-Zhi Lin: And notice that in scenarios A and B — which together have 75% probability — deep technical skills remain valuable. Only in the wild card scenario do they become less relevant. I'll take those odds.

Kent Beck: The meta-lesson: when the future is uncertain, invest in skills that are valuable across multiple scenarios. Testing discipline, design thinking, and communication are valuable in all three scenarios. That's what "principles outlast tools" means in practice.

Sarah Chen: And the meta-meta-lesson: the ability to read scenarios and adapt is itself the most important skill. The engineer who can assess which scenario is unfolding and adjust their strategy in real time — that's the one who thrives regardless.

總結:最終建議與綜合投票

主持人: 我們涵蓋了很多內容——威脅評估、護城河技能、中階陷阱、具體行動計畫和台灣特定的動態。讓我們以每位專家給這位 25 歲工程師的一句最終建議來收尾。然後做最終的綜合投票。

葉大師: 不要再認同自己是「寫程式的人」,開始認同自己是「確保正確的程式碼被撰寫、驗證和治理的人」——那個身分認同轉變比你今年學到的任何技術技能都更有價值。

張雅晴: 這個週末重寫你的履歷,下週加入一個社群,下個月投一家有工程驅動文化的公司——世界上最好的策略如果沒有執行就毫無價值,而執行從今天最小的可能步驟開始。

Kent Beck: 在寫程式碼前先寫測試,在實作前先問「為什麼」,每週交付一個不完美的東西——定義正確性、質疑假設和快速迭代的紀律,會在你職涯的未來 40 年中持續為你服務,無論 AI 能做什麼。

林宏志: 保持雙手沾滿泥巴、好奇心銳利、工程日誌持續更新——在每一波技術浪潮中存活下來的工程師,都是那些從未停止建構真實事物、從未停止追問它們底層如何運作的人。

Sarah Chen: 建構一個足夠深入讓人來找你求解的東西,建立一段足夠堅固讓人在機會出現時為你背書的關係,建立一個足夠高效讓 AI 使你 10 倍更好而非可替代的工作流程——深度、關係和槓桿是你的三根支柱。

Dr. 陳明哲: 擁抱掙扎——每一個你自己 debug 而不是問 AI 的 bug、每一份你從頭撰寫而不是生成的規格、每一個你苦惱而不是外包的設計決策——那些掙扎正在建構專業的神經通路,沒有 AI 能走捷徑,沒有市場轉變能貶值。

主持人: 強有力的建議。但在投票之前,讓我挑起最後一次交鋒。葉大師,你和宏志整晚都在交鋒。你們各自能承認對方說的一件改變了你思考的事嗎?

葉大師: (停頓)好問題。宏志關於 a11y 複雜度的觀點——47 種螢幕閱讀器組合——那是一個警鐘。我一直專注在瓶頸遷移的宏觀趨勢,但我低估了前端特定邊界案例的純技術複雜度。我修正後的立場:對例行前端工作,瓶頸「已經」遷移了,但對像無障礙、效能和複雜互動模式這樣的專業前端領域,深度技術技能本身仍然是一個瓶頸。不是二擇一——是一個光譜。你在光譜上的位置決定了你的策略。

林宏志: (點頭)而葉大師的四個問責問題真的重塑了我對自己職涯的思考方式。我一直專注在成為房間裡最好的問題解決者,但我沒有刻意去擁有問題「定義」的所有權。從下週開始,我要為我做的每一個功能撰寫行為規格——即使沒人要求我這樣做。因為葉大師是對的:定義規格的工程師比實作規格的工程師有更多槓桿。

Sarah Chen: (微笑)這是今晚發生的最有價值的事。不是框架,不是數據——是基於證據更新自己立場的意願。那才是真正的資深看起來的樣子。

Kent Beck: 那就是開放問題解決看起來的樣子。封閉問題思考者捍衛他們的立場。開放問題思考者更新它。如果這位 25 歲的工程師今晚只帶走一件事,帶走這個:成為那個當證據支持時就改變主意的人。那不是軟弱——那是最有價值的工程技能。

Dr. 陳明哲: 我分享一個應該給這位 25 歲工程師希望的數據點。在我們追蹤 300 位工程師 5 年的縱向研究中,職涯成功的最強預測因子——定義為達到 senior/staff 等級、薪資成長和工作滿意度——不是 IQ,不是教育背景,不是起始公司的聲望。是成長心態,以他們尋求回饋、在證據與假設矛盾時改變方法、以及投資在舒適圈之外學習的意願來衡量。這與職涯成功的相關係數是 r=0.67,對行為變項來說是非常高的。

張雅晴: 從招聘端,我確認這一點。我安置的工程師中,在新角色上最成功的不是技術上最聰明的。他們是最有適應力的——那些在第一個月問最多問題、主動尋求回饋、不怕說「我不知道,但我會搞清楚」的人。可教性每次都贏過天賦,尤其是在像我們這樣快速變化的環境中。

主持人: 現在,最終的綜合投票。

最終投票:對這位 25 歲工程師最佳的整體策略是什麼?

專家 | 策略摘要 | 關鍵重點
葉大師 | 成為規格擁有者和治理專家;用 TOC 識別在哪裡增加價值 | 從寫程式者到治理者的身分轉變
張雅晴 | 優化職涯定位:對的履歷、對的公司、對的角色 | 執行力和市場意識
Kent Beck | 精通基礎(TDD、設計、測試);讓原則比工具長壽 | 永恆的紀律勝過流行的技能
林宏志 | 深入瀏覽器基礎;建立解決困難問題的紀錄 | 技術深度作為終極保險
Sarah Chen | T 型發展:一個深度垂直 + 廣泛 AI 素養 + 國際曝光 | 平衡成長加上全球選擇權
Dr. 陳明哲 | 透過掙扎刻意練習;建立除錯直覺;貢獻社群 | 以實證為基礎的技能發展

共識點(六位專家都同意的):

  1. 威脅是真實的但時間表可辯論 —— AI 將重塑前端工程,但完全取代中階工程師不是迫在眉睫的
  2. 寫程式碼的速度不再是差異化因素 —— 市場越來越重視程式碼「周圍」的一切:規格、測試、驗證、溝通
  3. 中階陷阱很危險 —— 有 Senior 職稱但沒有資深級問題解決技能,是一個危險的位置,尤其當 AI 提高了門檻
  4. 積極的社群參與加速成長 —— 台灣緊密的開發者社群是一項未被充分利用的資產
  5. AI 素養是不可協商的 —— 2026 年不使用 AI 工具就像 2012 年不使用 Git
  6. 這位 25 歲的工程師有 2-5 年的窗口 —— 不是無限的,但足以透過刻意努力重新定位

分歧點(座談成員仍然意見不一的):

  1. 時間緊迫性 —— 葉大師和張雅晴說 2-3 年;林宏志和陳明哲說 4-5 年以上
  2. 基礎 vs 工具優先 —— Kent 說永遠基礎優先;Sarah 說從第一天就交錯學習
  3. 留下 vs 出去 —— 共識是「暫時留下」但準備國際選項的緊迫性因人而異
  4. 技術深度 vs 廣度 —— 林宏志主張深度技術技能;葉大師主張治理技能;Sarah 說透過 T 型兩者兼得

情境分析:三種可能的未來

情境 A:AI 加速持續(機率:40%)

LLM 能力在 18 個月內再提升 50% 以上。AI 能處理跨元件協調、基本設計決策,甚至一些測試。對這位 25 歲工程師的影響:迫切需要移動到第三層(問題定義和治理)。公司砍中階前端職缺 30-40%。投資在規格撰寫、驗證和 AI 編排上的工程師茁壯。只投資在更深 React 知識上的工程師掙扎,因為 AI 現在能做他們大部分做的事。

情境 B:AI 高原期(機率:35%)

進步因資料規模限制、法規約束或根本的架構局限而減緩。AI 工具逐步改善但沒有達到「跨元件協調」的門檻。影響:更多喘息空間。深度技術技能和治理技能都仍然有價值。台灣市場穩定。這位 25 歲的工程師有 5 年以上的時間重新定位。T 型策略在這裡是最佳的——在一個領域深入,同時建立廣泛的 AI 素養。

情境 C:黑天鵝——AI 推理突破(機率:25%)

一個根本的架構轉變(超越 transformer)使 AI 能真正推理複雜系統,包括前端狀態管理、無障礙影響和跨瀏覽器行為。影響:變革性的破壞。只有最高層級的技能仍然屬於人類領域:商業策略、使用者研究、倫理治理。這位 25 歲工程師的最佳對沖:盡可能往上游移動——成為定義「做什麼」而非「怎麼做」的人。

葉大師: 注意在三種情境中,治理和規格定義技能都是有價值的。這就是為什麼我為它們辯護——它們是跨多種未來的穩健策略。

林宏志: 也注意在情境 A 和 B——合計有 75% 的機率——深度技術技能仍然有價值。只有在黑天鵝情境中它們才變得不太相關。我接受那個機率。

Kent Beck: 元教訓:當未來不確定時,投資在跨多種情境都有價值的技能上。測試紀律、設計思維和溝通在三種情境中都有價值。這就是「原則比工具長壽」在實踐中的意義。

Sarah Chen: 元元教訓:「閱讀情境並適應」的能力本身就是最重要的技能。能評估哪種情境正在展開並即時調整策略的工程師——那就是無論如何都會茁壯的人。


Appendix: The 12-Month Roadmap Summary

For the 25-year-old frontend engineer who wants a single, consolidated plan:

Month 1: Foundation Reset

  • Rewrite resume in PAR format (Problem → Action → Result)
  • Start engineering journal to document hard problems solved
  • Join one Taiwan developer community (React Taipei, JSDC, or Frontend Developers Taiwan)

Months 2-3: Spec & Testing Muscle

  • For every feature ticket, write behavioral specs before implementation
  • Begin TDD practice: tests first on at least one feature per sprint
  • Set up structured AI tool experiment: weekly comparison of manual vs AI-assisted development

Months 4-6: Depth Sprint

  • Pick ONE vertical (a11y, performance, design systems, etc.) and go deep
  • Read the relevant spec (W3C, browser engine documentation)
  • Investigate 5 open issues in a major open-source frontend project to build debugging skills

Months 7-9: Visibility & Network

  • Give your first community talk (even a 5-minute lightning talk counts)
  • Build your AI evaluation framework and share it with your team
  • Identify 2-3 mentor candidates and initiate mentorship conversations

Months 10-12: Strategic Positioning

  • Research 10-15 target companies with engineering-driven culture
  • Apply with tailored pitches demonstrating impact, AI literacy, and depth
  • Evaluate: are you solving more open problems than 6 months ago? If not, consider changing environments

Throughout all 12 months:

  • Update engineering journal weekly
  • Practice written communication (design docs, PR descriptions, RFC proposals)
  • Track your ratio of open problems vs closed problems — aim for at least 30% open by month 12

Appendix B: Key Frameworks Referenced in This Discussion

1. Theory of Constraints (TOC) — Bottleneck Migration Model

Source: Eliyahu Goldratt, The Goal (1984); applied to software by William Yeh

Core idea: Every system has one bottleneck that limits throughput. Optimizing non-bottlenecks is waste. When a bottleneck is relieved, a new one emerges.

Applied to frontend careers:

Era | Bottleneck | Implication
Pre-2020 | Writing code | Companies hire more developers
2020-2024 | Writing code faster | Companies adopt frameworks, hire bootcamp grads
2024-2026 | Verifying AI-generated code | Companies need engineers who can spec, test, and govern
2026+ | Defining what to build | Companies need engineers who understand business problems

Self-assessment question: Where does your value sit relative to the current bottleneck? If it's at "writing code," you're at risk. If it's at "defining specs" or "verifying correctness," you're well positioned.

2. Open vs Closed Problems Framework

Source: Herbert Simon, Sciences of the Artificial (1969); adapted by William Yeh

Closed problems: Clear input, clear expected output. Can be solved algorithmically. Examples: "Convert this Figma design to React components." "Add pagination to this API response." "Fix this CSS alignment bug."

Open problems: Ambiguous input, no single correct output. Requires human judgment, stakeholder negotiation, and domain knowledge. Examples: "Users are churning during onboarding — why?" "Should we build this feature as a modal or a new page?" "How do we make our app accessible without breaking the existing UX?"

Key insight: AI excels at closed problems and struggles with open problems. Career safety correlates with the proportion of open problems you solve.

3. The Four Accountability Questions

Source: William Yeh, adapted from governance frameworks

For any AI-assisted development workflow, ask:

  1. Who defines the spec? The behavioral contract that determines what "correct" means.
  2. Who verifies correctness? Not "does it compile" — does it actually work for all intended use cases?
  3. Who controls permissions? What can the AI modify, access, or deploy?
  4. Who bears consequences? When things break in production, whose name is on the incident report?

Career implication: If you can own all four, you're irreplaceable. If you own none, you're interchangeable with an AI tool.

4. T-Shaped Skills Model

Source: David Guest (1991); popularized by IDEO and adapted by Sarah Chen

Vertical bar: Deep expertise in one specific domain. For frontend: choose from accessibility, performance, design systems, complex state management, real-time UIs, or animation/interaction.

Horizontal bar: Broad literacy across adjacent domains. Includes: AI tool literacy, basic UX research, DevOps/CI-CD awareness, backend API design understanding, product management basics, and technical writing.

Depth metric: You're "deep enough" when people come to you with questions in that domain. You're "broad enough" when you can have productive conversations with any team member in adjacent roles.

5. Impact Radius Model

Source: Meta engineering levels framework, described by Sarah Chen

Level | Impact Radius | Key Behaviors
Junior | Self | Completes own tasks reliably
Mid | Team | Influences team decisions, reviews code meaningfully
Senior | Multi-team | Drives cross-team technical decisions, sets standards
Staff | Organization | Influences engineering strategy, mentors seniors
Principal | Industry | Shapes industry practices and open-source ecosystems

Self-assessment: What's the largest scope that your decisions have influenced in the past 6 months?

6. Dreyfus Model of Skill Acquisition

Source: Hubert Dreyfus and Stuart Dreyfus (1980)

Stage | Characteristics | Frontend Example
Novice | Follows rules rigidly | Follows tutorial code exactly
Advanced Beginner | Recognizes patterns | Knows when to use useEffect vs useMemo
Competent | Plans and prioritizes | Can architect a feature independently
Proficient | Sees situations holistically | Senses that an approach "smells wrong" before articulating why
Expert | Acts from intuition | Immediately identifies the root cause of a complex bug

AI threat level by stage: Novice and Advanced Beginner tasks are most automatable. Proficient and Expert tasks require intuition that AI cannot replicate. The dangerous zone is Competent — enough skill to be productive, but not enough intuition to be irreplaceable.

7. Clean Code as AI Infrastructure

Source: Jain et al., ICLR 2024

Key findings:

  • Well-structured codebases showed 2.3x higher LLM code generation success rates
  • Consistent naming conventions improved AI code accuracy by 34%
  • Comprehensive test suites served as implicit specs, guiding AI toward correct implementations
  • Clear module boundaries reduced AI-generated cross-module errors by 48%

Implication: Investing in code quality is not just craftsmanship — it's optimizing the AI's working environment. The engineer who maintains clean code is simultaneously improving their own productivity and the AI's effectiveness.

8. Semantic Non-Determinism of LLMs

Source: Dr. Ming-Zhe Chen, NTU research

Core concept: Same prompt → different outputs. This is not a bug — it's a fundamental property of how LLMs generate text through probabilistic token sampling.

Frontend implication: You cannot treat AI-generated code as deterministic. Every output must be verified independently. This creates a permanent need for human verification — the "verification layer" that William Yeh identifies as the new bottleneck.

Practical test results (NTU lab):

  • Isolated component tasks: 78-85% accuracy
  • Cross-component coordination: 31-40% accuracy
  • Ambiguous requirement interpretation: 12-18% accuracy

Appendix C: Recommended Reading and Resources

Books:

  • The Goal — Eliyahu Goldratt (TOC fundamentals)
  • The Phoenix Project — Gene Kim (TOC applied to IT)
  • Test-Driven Development: By Example — Kent Beck
  • Refactoring: Improving the Design of Existing Code — Martin Fowler
  • Don't Make Me Think — Steve Krug (UX fundamentals)
  • Inclusive Design Patterns — Heydon Pickering (a11y)

Online resources:

  • Web.dev (Google's web platform documentation)
  • MDN Web Docs (comprehensive web reference)
  • React source code on GitHub (read the reconciler)
  • Chrome DevTools documentation
  • W3C WCAG 2.1 specification

Communities (Taiwan-specific):

  • React Taipei (Facebook group + meetups)
  • Vue.js Taiwan
  • Frontend Developers Taiwan (Facebook)
  • COSCUP (annual open-source conference)
  • MOPCON (annual mobile/web conference in Kaohsiung)
  • JSDC (JavaScript Developers Conference)
  • g0v (civic tech community — excellent for open-problem practice)

AI tool learning path:

  1. Start with GitHub Copilot or Cursor (lowest barrier to entry)
  2. Learn prompt engineering for code generation (Anthropic's prompt engineering guide)
  3. Practice structured AI evaluation (compare AI output vs manual implementation weekly)
  4. Explore Claude Code or similar agentic tools for complex workflows
  5. Build your own AI evaluation framework and share it with your team

Appendix D: Self-Assessment Checklist

Use this monthly checklist to track your progress from mid-level to senior:

Problem Type Ratio

  • [ ] I track the ratio of open vs closed problems I solve each week
  • [ ] At least 20% of my work involves open problems (ambiguous requirements, design decisions)
  • [ ] I have pushed back on at least one requirement this month with data-backed reasoning
  • [ ] I have proposed an alternative approach to a PM's spec at least once this quarter

Spec & Verification Skills

  • [ ] I write behavioral specs (not just implementation notes) before coding
  • [ ] I define acceptance criteria that include edge cases and error states
  • [ ] I have a testing strategy for my features that goes beyond "it renders correctly"
  • [ ] I can explain the four accountability questions and how they apply to my current project

Technical Depth

  • [ ] I can explain how my framework (React, Vue, etc.) works under the hood, not just how to use it
  • [ ] I have investigated at least one browser-level behavior this month (rendering pipeline, event loop, etc.)
  • [ ] I can debug a performance issue using browser DevTools without relying on AI
  • [ ] I have read source code of a library I use this month

AI Literacy

  • [ ] I have used AI tools for at least one task this week and documented what worked/didn't
  • [ ] I can identify common failure modes of AI-generated frontend code
  • [ ] I have an evolving framework for evaluating AI output quality
  • [ ] I can explain to a non-technical person why AI can't fully replace frontend engineers (yet)

Impact Radius

  • [ ] My code reviews contain substantive design feedback, not just "LGTM"
  • [ ] I have influenced at least one decision that affected people beyond my immediate team
  • [ ] I have mentored or helped a more junior engineer this month
  • [ ] I have written at least one design document or technical RFC this quarter

Career Positioning

  • [ ] My resume uses PAR format (Problem → Action → Result) for all bullet points
  • [ ] I have attended at least one community event or meetup this month
  • [ ] I have updated my engineering journal with at least 4 entries this month
  • [ ] I know my "T-shape" — what I'm going deep on, and what I'm building broad literacy in

Scoring:

  • 0-8 checks: You're at the beginning. Focus on building one habit at a time.
  • 9-16 checks: You're making progress. The mid-to-senior transition is underway.
  • 17-21 checks: You're operating at a genuine senior level. Start thinking about Staff trajectory.
  • 22-24 checks: You're ready for the next level. Time to expand your impact radius.

Appendix E: Conversation Starters for Your Next 1:1 with Your Manager

If you want to start shifting from closed-problem executor to open-problem co-owner, use these in your next 1:1:

  1. "I've been thinking about how AI tools are changing our workflow. Can I run a structured experiment where I document AI-assisted vs manual development for a sprint, and present the findings to the team?"

  2. "I'd like to start writing behavioral specs before implementation. Would you be open to me adding spec review as a step in our feature development process?"

  3. "I've noticed that [specific area — a11y, performance, design system] is an area where our team doesn't have deep expertise. I'd like to invest in becoming our go-to person for that. Can we discuss how to create space for that?"

  4. "I want to grow toward solving more open-ended problems. Are there upcoming features where I could be involved earlier in the product definition process, rather than just receiving the final spec?"

  5. "I've been documenting the hard problems I solve in an engineering journal. I'd love to share some of these in our team retro — would that be valuable?"

These questions accomplish two things: they signal growth mindset to your manager, and they create concrete opportunities to practice open-problem skills.

Appendix F: Key Numbers Referenced in This Discussion

Metric | Value | Source
Junior frontend job postings decline | 12 → 4 → 1 (2024-2026) | Ya-Ching Chang, direct data
Hiring managers planning to reduce junior frontend | 71% | Ya-Ching Chang, survey of 38 managers
AI productivity gain on routine tasks | 40-60% | Sarah Chen, Meta internal data
Production incident increase with AI-assisted junior code | 28% | Sarah Chen, Meta internal data
AI accuracy on isolated component tasks | 78-85% | Dr. Ming-Zhe Chen, NTU lab
AI accuracy on cross-component tasks | 31-40% | Dr. Ming-Zhe Chen, NTU lab
AI accuracy on ambiguous requirement tasks | 12-18% | Dr. Ming-Zhe Chen, NTU lab
AI improvement from GPT-3.5 to Claude 3.5 on frontend benchmarks | 47% in 18 months | Dr. Ming-Zhe Chen, NTU lab
Companies rating "debugging" as critical skill | 89% | Dr. Ming-Zhe Chen, ICSE 2025 survey
Companies rating "writing new code" as critical | 34% | Dr. Ming-Zhe Chen, ICSE 2025 survey
LLM code generation success improvement with clean code | 2.3x | Jain et al., ICLR 2024
Salary negotiation improvement (Taiwan) | 12-18% over initial offer | Ya-Ching Chang, direct data
Taiwan frontend consulting rates | NT$3,000-8,000/hour | Hong-Zhi Lin, personal experience
Growth mindset correlation with career success | r=0.67 | Dr. Ming-Zhe Chen, longitudinal study
Taiwan tech companies with AI tool policies | ~35% | Ya-Ching Chang, industry survey
Resume callback improvement with PAR format | ~50% | Ya-Ching Chang, direct data
Frontend salary gap (Taiwan vs Bay Area) | 3-5x | Ya-Ching Chang, market data
Taiwan companies with formal IC career ladders | ~40% | Ya-Ching Chang, industry survey
Time to transition from "ticket implementer" to "problem co-definer" | 6-12 months (deliberate) | William Yeh, consulting experience

附錄:12 個月路線圖摘要

給那位想要一份整合計畫的 25 歲前端工程師:

第 1 個月:基礎重置

  • 用 PAR 格式重寫履歷(Problem 問題 → Action 行動 → Result 結果)
  • 開始工程日誌,記錄解決的困難問題
  • 加入一個台灣開發者社群(React Taipei、JSDC 或 Frontend Developers Taiwan)

第 2-3 個月:規格與測試肌肉

  • 對每張功能 ticket,在實作前撰寫行為規格
  • 開始 TDD 練習:每個 sprint 至少一個功能先寫測試
  • 設計結構化 AI 工具實驗:每週比較手動 vs AI 輔助開發

第 4-6 個月:深度衝刺

  • 選「一個」垂直方向(a11y、效能、設計系統等)然後深入
  • 閱讀相關規範(W3C、瀏覽器引擎文件)
  • 研究一個主要開源前端專案中的 5 個開放 issue 以建立除錯技能

第 7-9 個月:能見度與人脈

  • 做你的第一次社群演講(即使是 5 分鐘的 lightning talk 也算)
  • 建立你的 AI 評估框架並分享給你的團隊
  • 找出 2-3 位潛在導師並開始導師對話

第 10-12 個月:策略性定位

  • 研究 10-15 家有工程驅動文化的目標公司
  • 用展示影響力、AI 素養和深度的客製化 pitch 投遞
  • 評估:你解決的開放問題比 6 個月前多嗎?如果不是,考慮換環境

貫穿全部 12 個月:

  • 每週更新工程日誌
  • 練習書面溝通(設計文件、PR 描述、RFC 提案)
  • 追蹤你的開放問題 vs 封閉問題比例——目標是到第 12 個月至少 30% 是開放問題

附錄 B:本次討論中引用的關鍵框架

1. 約束理論(TOC)——瓶頸遷移模型

來源: Eliyahu Goldratt 著 The Goal(1984);由葉大師應用於軟體領域

核心概念: 每個系統都有一個限制產出的瓶頸。優化非瓶頸是浪費。當一個瓶頸被解除,新的就會浮現。

應用於前端職涯:

時代 | 瓶頸 | 影響
2020 前 | 寫程式碼 | 公司僱用更多開發者
2020-2024 | 更快寫程式碼 | 公司採用框架、僱用 bootcamp 畢業生
2024-2026 | 驗證 AI 生成的程式碼 | 公司需要能寫規格、測試和治理的工程師
2026+ | 定義該建構什麼 | 公司需要理解商業問題的工程師

自評問題: 你的瓶頸價值在哪裡?如果在「寫程式碼」,你有風險。如果在「定義規格」或「驗證正確性」,你定位良好。

2. 開放 vs 封閉問題框架

來源: Herbert Simon 著 Sciences of the Artificial(1969);由葉大師改編

封閉問題: 清楚的輸入,清楚的預期輸出。可以用演算法解決。例如:「把這個 Figma 設計轉換成 React 元件。」「為這個 API 回應加分頁。」「修這個 CSS 對齊的 bug。」

開放問題: 模糊的輸入,沒有單一正確的輸出。需要人類判斷力、利益關係人協商和領域知識。例如:「使用者在 onboarding 時流失——為什麼?」「我們該把這個功能做成 modal 還是新頁面?」「如何讓我們的應用無障礙而不破壞現有的 UX?」

關鍵洞察: AI 擅長封閉問題,在開放問題上掙扎。職涯安全性與你解決的開放問題比例相關。

3. 四個問責問題

來源: 葉大師,改編自治理框架

對任何 AI 輔助的開發工作流程,問:

  1. 誰定義規格? 決定「正確」意味著什麼的行為契約。
  2. 誰驗證正確性? 不是「能不能編譯」——它是否真正適用於所有預期的使用情境?
  3. 誰控制權限? AI 能修改、存取或部署什麼?
  4. 誰承擔後果? 生產環境出事時,事故報告上寫的是誰的名字?

職涯影響: 如果你能掌握全部四個,你就不可取代。如果你一個都沒有,你就可以跟 AI 工具互換。

4. T 型技能模型

來源: David Guest(1991);由 IDEO 推廣,由 Sarah Chen 改編

垂直線: 在一個特定領域的深度專業。對前端:從無障礙、效能、設計系統、複雜狀態管理、即時 UI 或動畫/互動中選擇。

水平線: 跨相鄰領域的廣泛素養。包括:AI 工具素養、基本 UX 研究、DevOps/CI-CD 認知、後端 API 設計理解、產品管理基礎和技術寫作。

深度指標: 當人們在那個領域來找你問問題時,你就「夠深了」。當你能跟相鄰角色的任何團隊成員進行有成效的對話時,你就「夠廣了」。

5. 影響半徑模型

來源: Meta 工程等級框架,由 Sarah Chen 描述

等級 | 影響半徑 | 關鍵行為
Junior | 自己 | 可靠地完成自己的任務
Mid | 團隊 | 影響團隊決策、有意義地 review 程式碼
Senior | 多個團隊 | 驅動跨團隊技術決策、設定標準
Staff | 組織 | 影響工程策略、指導資深工程師
Principal | 產業 | 塑造產業實踐和開源生態系

自評: 過去 6 個月中,你的決策影響的最大範圍是什麼?

6. Dreyfus 技能習得模型

來源: Hubert Dreyfus 和 Stuart Dreyfus(1980)

階段 | 特徵 | 前端範例
新手 | 嚴格遵循規則 | 完全照教程程式碼抄寫
進階初學者 | 識別模式 | 知道何時用 useEffect vs useMemo
勝任者 | 計劃和排序優先級 | 能獨立架構一個功能
精通者 | 整體性地看待情境 | 在能清楚說出為什麼之前就感覺到一個方法「有味道」
專家 | 基於直覺行動 | 立即識別出複雜 bug 的根因

各階段的 AI 威脅等級: 新手和進階初學者的任務最容易被自動化。精通者和專家的任務需要 AI 無法複製的直覺。危險區是勝任者——足夠的技能來保持生產力,但不夠的直覺來變得不可取代。

7. Clean Code 作為 AI 基礎設施

來源: Jain 等人,ICLR 2024

關鍵發現:

  • 結構良好的程式碼庫顯示 2.3 倍更高的 LLM 程式碼生成成功率
  • 一致的命名慣例提升 AI 程式碼準確率 34%
  • 全面的測試套件作為隱式規格,引導 AI 朝正確的實作方向
  • 清晰的模組邊界減少 AI 生成的跨模組錯誤 48%

影響: 投資程式碼品質不只是工匠精神——它是在優化 AI 的工作環境。維護 clean code 的工程師同時在提升自己的生產力和 AI 的效能。

8. LLM 的語意不確定性

來源: Dr. 陳明哲,台大研究

核心概念: 相同的 prompt → 不同的輸出。這不是 bug——這是 LLM 透過機率性 token 取樣生成文字的基本屬性。

前端影響: 你不能把 AI 生成的程式碼視為確定性的。每個輸出都必須獨立驗證。這創造了對人類驗證的永久需求——葉大師識別為新瓶頸的「驗證層」。

實測結果(台大實驗室):

  • 獨立元件任務:78-85% 準確率
  • 跨元件協調:31-40% 準確率
  • 模糊需求解讀:12-18% 準確率

附錄 C:推薦閱讀與資源

書籍:

  • The Goal —— Eliyahu Goldratt(TOC 基礎)
  • The Phoenix Project —— Gene Kim(TOC 應用於 IT)
  • Test-Driven Development: By Example —— Kent Beck
  • Refactoring: Improving the Design of Existing Code —— Martin Fowler
  • Don't Make Me Think —— Steve Krug(UX 基礎)
  • Inclusive Design Patterns —— Heydon Pickering(a11y)

線上資源:

  • Web.dev(Google 的 web 平台文件)
  • MDN Web Docs(全面的 web 參考)
  • React GitHub 原始碼(讀 reconciler)
  • Chrome DevTools 文件
  • W3C WCAG 2.1 規範

社群(台灣特定):

  • React Taipei(Facebook 社團 + 聚會)
  • Vue.js Taiwan
  • Frontend Developers Taiwan(Facebook)
  • COSCUP(年度開源研討會)
  • MOPCON(年度行動/網頁研討會,高雄)
  • JSDC(JavaScript 開發者大會)
  • g0v(公民科技社群——開放問題練習的絕佳場所)

AI 工具學習路徑:

  1. 從 GitHub Copilot 或 Cursor 開始(進入門檻最低)
  2. 學習程式碼生成的 prompt engineering(Anthropic 的 prompt engineering 指南)
  3. 練習結構化 AI 評估(每週比較 AI 輸出 vs 手動實作)
  4. 探索 Claude Code 或類似的 agentic 工具處理複雜工作流程
  5. 建立你自己的 AI 評估框架並分享給你的團隊

附錄 D:自我評估清單

每月用這個清單追蹤你從中階到資深的進展:

問題類型比例

  • [ ] 我每週追蹤解決的開放 vs 封閉問題比例
  • [ ] 我的工作至少 20% 涉及開放問題(模糊需求、設計決策)
  • [ ] 我這個月至少用數據支持的推理推回了一個需求
  • [ ] 我這一季至少一次對 PM 的規格提出了替代方案

規格與驗證技能

  • [ ] 我在寫程式碼前撰寫行為規格(不只是實作筆記)
  • [ ] 我定義的驗收標準包含邊界案例和錯誤狀態
  • [ ] 我的功能有超越「它能正確渲染」的測試策略
  • [ ] 我能解釋四個問責問題以及它們如何適用於我目前的專案

技術深度

  • [ ] 我能解釋我的框架(React、Vue 等)底層如何運作,不只是如何使用
  • [ ] 我這個月至少調查了一個瀏覽器層級的行為(渲染管線、事件迴圈等)
  • [ ] 我能用瀏覽器 DevTools debug 效能問題而不依賴 AI
  • [ ] 我這個月讀了一個我使用的函式庫的原始碼

AI 素養

  • [ ] 我這週至少用 AI 工具做了一個任務並記錄了什麼有效/無效
  • [ ] 我能識別 AI 生成前端程式碼的常見失敗模式
  • [ ] 我有一個持續演化的框架來評估 AI 輸出品質
  • [ ] 我能向非技術人員解釋為什麼 AI(還)不能完全取代前端工程師

影響半徑

  • [ ] 我的 code review 包含實質的設計回饋,不只是「LGTM」
  • [ ] 我至少影響了一個影響範圍超出我直屬團隊的決策
  • [ ] 我這個月指導或幫助了至少一位較資淺的工程師
  • [ ] 我這一季至少寫了一份設計文件或技術 RFC

職涯定位

  • [ ] 我的履歷所有要點都使用 PAR 格式(Problem → Action → Result)
  • [ ] 我這個月至少參加了一個社群活動或聚會
  • [ ] 我這個月在工程日誌中至少寫了 4 筆記錄
  • [ ] 我知道我的「T 型」——我在深入什麼,以及我在建立什麼方面的廣泛素養

評分:

  • 0-8 項打勾:你在起步階段。專注在一次建立一個習慣。
  • 9-16 項打勾:你在進步中。中階到資深的轉變正在進行。
  • 17-21 項打勾:你在真正的資深水準運作。開始思考 Staff 軌跡。
  • 22-24 項打勾:你已經準備好下一個等級了。是時候擴展你的影響半徑。

附錄 E:跟主管 1:1 時的對話開場白

如果你想從封閉問題執行者轉向開放問題共同擁有者,在你的下次 1:1 中使用這些:

  1. 「我一直在想 AI 工具如何改變我們的工作流程。我可以做一個結構化實驗,記錄一個 sprint 中 AI 輔助 vs 手動開發的比較,然後向團隊呈現結果嗎?」

  2. 「我想開始在實作前撰寫行為規格。你願意讓我把規格審查加入我們的功能開發流程中嗎?」

  3. 「我注意到 [特定領域——a11y、效能、設計系統] 是我們團隊沒有深度專業的領域。我想投資成為我們這方面的首選人。我們能討論如何創造這個空間嗎?」

  4. 「我想成長到解決更多開放式的問題。有沒有即將到來的功能,我可以在產品定義過程中更早參與,而不是只接收最終的規格?」

  5. 「我一直在工程日誌中記錄我解決的困難問題。我很想在我們的團隊 retro 中分享一些——這會有價值嗎?」

這些問題達成兩個目的:它們向你的主管傳達成長心態,並且創造具體的機會來練習開放問題技能。

附錄 F:本次討論中引用的關鍵數據

指標 | 數值 | 來源
初階前端職缺下降 | 12 → 4 → 1(2024-2026) | 張雅晴,直接數據
計劃縮減初階前端的招聘主管 | 71% | 張雅晴,38 位主管調查
AI 在例行任務上的生產力提升 | 40-60% | Sarah Chen,Meta 內部數據
AI 輔助初階程式碼的生產環境事故增加 | 28% | Sarah Chen,Meta 內部數據
AI 在獨立元件任務的準確率 | 78-85% | Dr. 陳明哲,台大實驗室
AI 在跨元件任務的準確率 | 31-40% | Dr. 陳明哲,台大實驗室
AI 在模糊需求任務的準確率 | 12-18% | Dr. 陳明哲,台大實驗室
GPT-3.5 到 Claude 3.5 前端 benchmark 提升 | 18 個月內 47% | Dr. 陳明哲,台大實驗室
將「除錯」評為關鍵技能的公司 | 89% | Dr. 陳明哲,ICSE 2025 調查
將「寫新程式碼」評為關鍵的公司 | 34% | Dr. 陳明哲,ICSE 2025 調查
Clean code 對 LLM 生成能力的提升 | 2.3 倍 | Jain 等人,ICLR 2024
薪資談判改善(台灣) | 比初始 offer 高 12-18% | 張雅晴,直接數據
台灣前端顧問時薪 | NT$3,000-8,000/小時 | 林宏志,個人經驗
成長心態與職涯成功的相關 | r=0.67 | Dr. 陳明哲,縱向研究
有 AI 工具政策的台灣科技公司 | 約 35% | 張雅晴,產業調查
PAR 格式履歷的面試回覆率提升 | 約 50% | 張雅晴,直接數據
台灣 vs 灣區前端薪資差距 | 3-5 倍 | 張雅晴,市場數據
有正式 IC 職涯階梯的台灣公司 | 約 40% | 張雅晴,產業調查
從「ticket 實作者」到「問題共同定義者」的轉型時間 | 6-12 個月(刻意為之) | 葉大師,顧問經驗

"The best time to plant a tree was 20 years ago. The second best time is now. But the best strategy for planting? Ask six experts — and expect six different answers, all of them partially right." — Moderator's closing remark

「種一棵樹最好的時間是 20 年前。第二好的時間是現在。但種樹的最佳策略?問六位專家——然後預期六個不同的答案,每一個都部分正確。」—— 主持人結語


This roundtable discussion draws on frameworks from William Yeh's article "GenAI 時代的軟體工程師升級之路" (The Software Engineer's Upgrade Path in the GenAI Era), the Theory of Constraints as applied to software development, and research on AI capabilities in frontend engineering from NTU CSIE. The four accountability questions, open/closed problem framework, and Clean Code as AI infrastructure concept are adapted from William Yeh's original analysis. The Jain et al. ICLR 2024 study on LLM generativity and code quality is referenced with permission.

本次圓桌討論引用了葉大師 William Yeh 的文章「GenAI 時代的軟體工程師升級之路」中的框架、應用於軟體開發的約束理論,以及台大資工系關於 AI 在前端工程能力的研究。四個問責問題、開放/封閉問題框架和 Clean Code 作為 AI 基礎設施的概念改編自葉大師的原始分析。Jain 等人 ICLR 2024 關於 LLM 生成能力與程式碼品質的研究經許可引用。