{"id":369,"date":"2026-05-01T07:24:03","date_gmt":"2026-04-30T23:24:03","guid":{"rendered":"http:\/\/www.faiyi.com\/?p=369"},"modified":"2026-05-01T07:24:03","modified_gmt":"2026-04-30T23:24:03","slug":"ai%e5%8a%a8%e6%80%81%e6%af%8f%e6%97%a5%e7%ae%80%e6%8a%a5-2026-05-01","status":"publish","type":"post","link":"http:\/\/www.faiyi.com\/?p=369","title":{"rendered":"AI\u52a8\u6001\u6bcf\u65e5\u7b80\u62a5 2026-05-01"},"content":{"rendered":"<p>\u65e5\u671f\uff1a2026-05-01<\/p>\n<p>\u672c\u671f\u805a\u7126\uff1a\u91cd\u70b9\u5173\u6ce8\u6a21\u578b\u53d1\u5e03\u4e0e release notes\u3001\u5b98\u65b9 engineering blog\u3001AI coding \/ agent \/ SRE\u3001\u8bc4\u6d4b\u699c\u5355\u53d8\u5316\u3001\u5f00\u53d1\u8005\u5b9e\u8df5\u535a\u5ba2\u3001\u6846\u67b6\u751f\u6001\u3001\u5f00\u6e90\u6a21\u578b\u4e0e\u771f\u5b9e\u7528\u6237\u89c6\u89d2\uff1b\u5f53 HN\u3001Reddit\u3001Hugging Face \u7b49\u793e\u533a\u6e90\u53ef\u8bbf\u95ee\u65f6\u4f18\u5148\u7eb3\u5165\u3002<\/p>\n<hr \/>\n<ol>\n<li>\n<p><strong>Introducing Claude Opus 4.7<\/strong>\uff08Anthropic News\uff09<\/p>\n<p><strong>\u4e2d\u6587\u6458\u8981\uff1a<\/strong>Anthropic \u6b63\u5f0f\u53d1\u5e03 Claude Opus 4.7\uff0c\u5728\u9ad8\u7ea7\u8f6f\u4ef6\u5de5\u7a0b\u4efb\u52a1\u4e0a\u8f83\u524d\u4ee3 Opus 4.6 \u6709\u663e\u8457\u63d0\u5347\uff0c\u5c24\u5176\u5728\u590d\u6742\u957f\u7a0b\u4efb\u52a1\u4e2d\u8868\u73b0\u66f4\u4e3a\u4e25\u8c28\u4e00\u81f4\u3002\u65b0\u6a21\u578b\u652f\u6301\u66f4\u9ad8\u5206\u8fa8\u7387\u56fe\u50cf\u8f93\u5165\uff08\u957f\u8fb9\u53ef\u8fbe 2576 \u50cf\u7d20\uff09\uff0c\u5e76\u5728\u4e13\u4e1a\u6587\u6863\u3001\u754c\u9762\u8bbe\u8ba1\u7b49\u521b\u610f\u4efb\u52a1\u4e2d\u5c55\u73b0\u66f4\u4f73\u5ba1\u7f8e\u3002Opus 4.7 \u540c\u6b65\u5f15\u5165\u4e86\u5b9e\u65f6\u7f51\u7edc\u5b89\u5168\u9632\u62a4\u673a\u5236\uff0c\u81ea\u52a8\u62e6\u622a\u9ad8\u98ce\u9669\u7f51\u7edc\u653b\u51fb\u8bf7\u6c42\uff0c\u5b89\u5168\u7814\u7a76\u4eba\u5458\u53ef\u901a\u8fc7 Cyber Verification Program 
\u7533\u8bf7\u5408\u6cd5\u4f7f\u7528\u6743\u9650\u3002API \u5b9a\u4ef7\u7ef4\u6301\u4e0d\u53d8\uff0c\u540c\u65f6\u65b0\u589e xhigh \u52aa\u529b\u7ea7\u522b\u3001\u4efb\u52a1\u9884\u7b97\u63a7\u5236\u53ca Claude Code \u7684 \/ultrareview \u6307\u4ee4\u7b49\u529f\u80fd\uff0c\u4e3a\u7528\u6237\u63d0\u4f9b\u66f4\u7cbe\u7ec6\u7684\u63a8\u7406\u4e0e\u6210\u672c\u6743\u8861\u9009\u9879\u3002<\/p>\n<p><strong>English Summary:<\/strong> Anthropic released Claude Opus 4.7, delivering notable improvements over Opus 4.6 in advanced software engineering and complex long-horizon tasks. The model features enhanced vision capabilities with support for higher-resolution images up to 2,576 pixels on the long edge. Opus 4.7 introduces real-time cyber safeguards that automatically block high-risk cybersecurity requests, with a Cyber Verification Program available for legitimate security research. Pricing remains unchanged at $5\/$25 per million input\/output tokens.<\/p>\n<p><a href=\"https:\/\/www.anthropic.com\/news\/claude-opus-4-7\" target=\"_blank\" rel=\"noopener noreferrer\">\u539f\u6587\u94fe\u63a5<\/a><\/p>\n<\/li>\n<li>\n<p><strong>An update on recent Claude Code quality reports<\/strong>\uff08Anthropic Engineering\uff09<\/p>\n<p><strong>\u4e2d\u6587\u6458\u8981\uff1a<\/strong>Anthropic \u5de5\u7a0b\u56e2\u961f\u53d1\u5e03 Claude Code \u8d28\u91cf\u95ee\u9898\u7684\u6280\u672f\u590d\u76d8\uff0c\u786e\u8ba4\u8fd1\u671f\u7528\u6237\u53cd\u9988\u7684\u6a21\u578b\u8868\u73b0\u4e0b\u964d\u6e90\u4e8e\u4e09\u9879\u72ec\u7acb\u53d8\u66f4\uff1a3 \u6708\u521d\u5c06\u9ed8\u8ba4\u63a8\u7406\u52aa\u529b\u7ea7\u522b\u4ece high \u964d\u81f3 medium\uff1b3 \u6708\u5e95\u7684\u7f13\u5b58\u4f18\u5316 bug \u5bfc\u81f4\u4f1a\u8bdd\u95f2\u7f6e\u8d85\u4e00\u5c0f\u65f6\u540e\u6301\u7eed\u4e22\u5931\u5386\u53f2\u63a8\u7406\u8bb0\u5f55\uff1b4 
\u6708\u4e2d\u65ec\u4e3a\u964d\u4f4e\u5197\u957f\u5ea6\u800c\u6dfb\u52a0\u7684\u7cfb\u7edf\u63d0\u793a\u8bcd\u610f\u5916\u635f\u5bb3\u4e86\u7f16\u7801\u8d28\u91cf\u3002\u4e09\u9879\u95ee\u9898\u5df2\u5206\u522b\u4e8e 4 \u6708 7 \u65e5\u300110 \u65e5\u300120 \u65e5\u4fee\u590d\u3002Anthropic \u5df2\u91cd\u7f6e\u6240\u6709\u8ba2\u9605\u8005\u7684\u4f7f\u7528\u989d\u5ea6\uff0c\u5e76\u627f\u8bfa\u52a0\u5f3a\u5185\u90e8\u6d4b\u8bd5\u6d41\u7a0b\uff0c\u5305\u62ec\u6269\u5927\u516c\u5f00\u7248\u672c\u7684\u5185\u6d4b\u8986\u76d6\u3001\u6539\u8fdb Code Review \u5de5\u5177\u4ee5\u652f\u6301\u591a\u4ed3\u5e93\u4e0a\u4e0b\u6587\uff0c\u4ee5\u53ca\u5bf9\u7cfb\u7edf\u63d0\u793a\u8bcd\u53d8\u66f4\u5b9e\u65bd\u66f4\u4e25\u683c\u7684\u8bc4\u4f30\u4e0e\u6e10\u8fdb\u5f0f\u53d1\u5e03\u7b56\u7565\u3002<\/p>\n<p><strong>English Summary:<\/strong> Anthropic&#039;s engineering team published a postmortem on recent Claude Code quality issues, tracing user reports of degraded performance to three separate changes: a March 4 default effort level reduction from high to medium; a March 26 caching optimization bug that continuously dropped reasoning history after sessions idled for over an hour; and an April 16 system prompt change to reduce verbosity that inadvertently hurt coding quality. All issues were resolved by April 20. 
Anthropic reset usage limits for all subscribers and committed to improved testing processes, including broader internal dogfooding of public builds, enhanced Code Review tooling with multi-repository context, and stricter evaluation with gradual rollouts for system prompt changes.<\/p>\n<p><a href=\"https:\/\/www.anthropic.com\/engineering\/april-23-postmortem\" target=\"_blank\" rel=\"noopener noreferrer\">\u539f\u6587\u94fe\u63a5<\/a><\/p>\n<\/li>\n<li>\n<p><strong>Scaling Managed Agents: Decoupling the brain from the hands<\/strong>\uff08Anthropic Engineering\uff09<\/p>\n<p><strong>\u4e2d\u6587\u6458\u8981\uff1a<\/strong>Anthropic \u5de5\u7a0b\u535a\u5ba2\u53d1\u5e03\u6258\u7ba1\u667a\u80fd\u4f53\uff08Managed Agents\uff09\u67b6\u6784\u8bbe\u8ba1\u6df1\u5ea6\u89e3\u6790\uff0c\u6838\u5fc3\u601d\u8def\u662f\u5c06&quot;\u5927\u8111&quot;\uff08Claude \u53ca\u5176 harness\uff09\u4e0e&quot;\u624b&quot;\uff08\u6c99\u7bb1\u6267\u884c\u73af\u5883\uff09\u53ca&quot;\u4f1a\u8bdd&quot;\uff08\u4e8b\u4ef6\u65e5\u5fd7\uff09\u89e3\u8026\u3002\u901a\u8fc7\u5c06 harness \u79fb\u51fa\u5bb9\u5668\u3001\u4f1a\u8bdd\u65e5\u5fd7\u72ec\u7acb\u6301\u4e45\u5316\u5b58\u50a8\uff0c\u7cfb\u7edf\u5b9e\u73b0\u4e86\u5404\u7ec4\u4ef6\u7684\u72ec\u7acb\u6545\u969c\u6062\u590d\u4e0e\u5f39\u6027\u4f38\u7f29\uff0cp50 \u9996 token \u5ef6\u8fdf\u964d\u4f4e\u7ea6 60%\uff0cp95 \u964d\u4f4e\u8d85 90%\u3002\u8be5\u67b6\u6784\u91c7\u7528\u64cd\u4f5c\u7cfb\u7edf\u5f0f\u7684\u865a\u62df\u5316\u62bd\u8c61\uff08session\/harness\/sandbox\uff09\uff0c\u4f7f\u5b9e\u73b0\u53ef\u968f\u6a21\u578b\u80fd\u529b\u6f14\u8fdb\u81ea\u7531\u66ff\u6362\uff0c\u540c\u65f6\u901a\u8fc7\u4ee4\u724c\u9694\u79bb\u4e0e MCP \u4ee3\u7406\u7b49\u673a\u5236\u5f3a\u5316\u5b89\u5168\u8fb9\u754c\uff0c\u652f\u6301\u591a\u5927\u8111\u3001\u591a\u624b\u7684\u7075\u6d3b\u7f16\u6392\uff0c\u4e3a\u957f\u7a0b\u81ea\u4e3b\u4efb\u52a1\u63d0\u4f9b\u53ef\u9760\u7684\u57fa\u7840\u8bbe\u65bd\u5e95\u5ea7\u3002<\/p>\n<p><strong>English Summary:<\/strong> 
Anthropic&#039;s engineering blog published a deep dive into Managed Agents architecture, centering on decoupling the &quot;brain&quot; (Claude and its harness) from the &quot;hands&quot; (sandbox execution environments) and &quot;session&quot; (event logs). By moving the harness out of containers and durably storing session logs independently, the system enables independent failure recovery and elastic scaling, achieving roughly 60% p50 and over 90% p95 time-to-first-token improvements. The design adopts OS-style virtualization abstractions (session\/harness\/sandbox) allowing implementations to evolve freely as models improve, while strengthening security boundaries through token isolation and MCP proxies. This architecture supports flexible orchestration of multiple brains and hands, providing reliable infrastructure for long-horizon autonomous tasks.<\/p>\n<p><a href=\"https:\/\/www.anthropic.com\/engineering\/managed-agents\" target=\"_blank\" rel=\"noopener noreferrer\">\u539f\u6587\u94fe\u63a5<\/a><\/p>\n<\/li>\n<li>\n<p><strong>Sources: Anthropic potential $900B+ valuation round could happen within two weeks<\/strong>\uff08TechCrunch AI\uff09<\/p>\n<p><strong>\u4e2d\u6587\u6458\u8981\uff1a<\/strong>\u636e TechCrunch \u63f4\u5f15\u77e5\u60c5\u4eba\u58eb\u6d88\u606f\uff0cAnthropic \u6b63\u8fdb\u884c\u65b0\u4e00\u8f6e\u7ea6 500 \u4ebf\u7f8e\u5143\u878d\u8d44\uff0c\u8981\u6c42\u6295\u8d44\u8005\u5728 48 \u5c0f\u65f6\u5185\u63d0\u4ea4\u8ba4\u8d2d\u989d\u5ea6\uff0c\u9884\u8ba1\u4e24\u5468\u5185\u5b8c\u6210\u4ea4\u5272\u3002\u672c\u8f6e\u4f30\u503c\u76ee\u6807\u7ea6 9000 \u4ebf\u7f8e\u5143\uff0c\u56e0\u6295\u8d44\u8005\u9700\u6c42\u65fa\u76db\u6700\u7ec8\u4f30\u503c\u53ef\u80fd\u66f4\u9ad8\u3002\u82e5\u8fbe\u6210\uff0cAnthropic \u4f30\u503c\u5c06\u8f83 2 \u6708\u4e0a\u4e00\u8f6e\u7684 3800 \u4ebf\u7f8e\u5143\u7ffb\u500d\u6709\u4f59\uff0c\u5e76\u8d85\u8fc7 OpenAI \u5e74\u521d\u521b\u4e0b\u7684 8520 \u4ebf\u7f8e\u5143\u7eaa\u5f55\u3002\u90e8\u5206 2024 
\u5e74\u53ca\u66f4\u65e9\u7684\u65e9\u671f\u6295\u8d44\u8005\u9009\u62e9\u8df3\u8fc7\u672c\u8f6e\uff0c\u62df\u5728\u516c\u53f8\u9884\u671f\u7684\u5e74\u5185 IPO \u65f6\u5957\u73b0\u3002Anthropic \u5e74\u5316\u6536\u5165\u8fd0\u884c\u7387\u5df2\u7a81\u7834 300 \u4ebf\u7f8e\u5143\uff0c\u5b9e\u9645\u63a5\u8fd1 400 \u4ebf\u7f8e\u5143\u3002<\/p>\n<p><strong>English Summary:<\/strong> According to TechCrunch sources, Anthropic is conducting a new funding round of roughly $50 billion, asking investors to submit allocations within 48 hours with an expected close within two weeks. The targeted valuation is approximately $900 billion, potentially higher given strong investor demand. If achieved, this would more than double Anthropic&#039;s February valuation of $380 billion and surpass OpenAI&#039;s record $852 billion post-money valuation from earlier this year. Some early investors from 2024 or earlier are skipping this round to potentially cash out during Anthropic&#039;s anticipated IPO later this year. 
The company&#039;s annual revenue run rate has exceeded $30 billion, with sources indicating it is closer to $40 billion.<\/p>\n<p><a href=\"https:\/\/techcrunch.com\/2026\/04\/30\/anthropic-potential-900b-valuation-round-could-happen-within-two-weeks\/\" target=\"_blank\" rel=\"noopener noreferrer\">\u539f\u6587\u94fe\u63a5<\/a><\/p>\n<\/li>\n<li>\n<p><strong>NVIDIA Launches Ising Open Models for Quantum Computing<\/strong>\uff08InfoQ AI\/ML\uff09<\/p>\n<p><strong>\u4e2d\u6587\u6458\u8981\uff1a<\/strong>NVIDIA \u53d1\u5e03\u5f00\u6e90\u6a21\u578b\u5bb6\u65cf NVIDIA Ising\uff0c\u4e13\u6ce8\u4e8e\u91cf\u5b50\u5904\u7406\u5668\u6821\u51c6\u4e0e\u91cf\u5b50\u7ea0\u9519\u4e24\u5927\u6838\u5fc3\u5de5\u7a0b\u6311\u6218\u3002\u8be5\u5bb6\u65cf\u5305\u542b\u89c6\u89c9-\u8bed\u8a00\u6821\u51c6\u6a21\u578b\uff08\u5b9e\u65f6\u89e3\u6790\u6d4b\u91cf\u6570\u636e\u5e76\u8c03\u6574\u786c\u4ef6\u53c2\u6570\uff09\u548c\u57fa\u4e8e 3D \u5377\u79ef\u795e\u7ecf\u7f51\u7edc\u7684\u89e3\u7801\u6a21\u578b\uff08\u9488\u5bf9\u5ef6\u8fdf\u6216\u7cbe\u5ea6\u4f18\u5316\uff09\u3002Ising \u6a21\u578b\u4ee5\u5f00\u6e90\u5f62\u5f0f\u53d1\u5e03\uff0c\u652f\u6301\u672c\u5730\u90e8\u7f72\u4e0e\u786c\u4ef6\u9002\u914d\uff0c\u914d\u5957\u63d0\u4f9b\u6570\u636e\u96c6\u3001\u5de5\u4f5c\u6d41\u793a\u4f8b\u53ca NIM \u5fae\u670d\u52a1\uff0c\u53ef\u4e0e CUDA-Q \u548c NVQLink \u96c6\u6210\u5b9e\u73b0\u6df7\u5408\u91cf\u5b50-\u7ecf\u5178\u7f16\u7a0b\u3002\u76f8\u6bd4 IBM\u3001Google \u7b49\u5382\u5546\u7684\u4e13\u6709\u65b9\u6848\uff0cIsing \u5b9a\u4f4d\u4e3a\u786c\u4ef6\u65e0\u5173\u7684\u5f00\u653e\u6a21\u578b\u5c42\uff0c\u793e\u533a\u5bf9\u5176\u964d\u4f4e\u91cf\u5b50\u8bbe\u5907\u8fd0\u7ef4\u5f00\u9500\u7684\u6f5c\u529b\u8868\u793a\u5173\u6ce8\uff0c\u540c\u65f6\u4e5f\u5728\u8ba8\u8bba\u6a21\u578b\u8de8\u67b6\u6784\u6cdb\u5316\u80fd\u529b\u53ca\u5b9e\u65f6\u7ea0\u9519\u5ef6\u8fdf\u7b49\u6280\u672f\u6311\u6218\u3002<\/p>\n<p><strong>English Summary:<\/strong> NVIDIA announced NVIDIA Ising, an open-source 
model family targeting quantum processor calibration and quantum error correction. The family includes a vision-language calibration model for real-time interpretation of measurement data and hardware parameter adjustment, plus 3D CNN-based decoding models optimized for either latency or accuracy. Released as open source with supporting datasets, workflow examples, and NIM microservices, Ising integrates with CUDA-Q and NVQLink for hybrid quantum-classical programming. Positioned as a hardware-agnostic open model layer compared to proprietary approaches from IBM and Google, the release has drawn community interest in reducing quantum device operational overhead, alongside discussions on cross-architecture generalization and real-time error correction latency challenges.<\/p>\n<p><a href=\"https:\/\/www.infoq.com\/news\/2026\/04\/nvidia-ising-quantum\/?utm_campaign=infoq_content&#038;utm_source=infoq&#038;utm_medium=feed&#038;utm_term=AI%2C+ML+%26+Data+Engineering\" target=\"_blank\" rel=\"noopener noreferrer\">\u539f\u6587\u94fe\u63a5<\/a><\/p>\n<\/li>\n<li>\n<p><strong>Reinforcement fine-tuning with LLM-as-a-judge<\/strong>\uff08AWS ML Blog\uff09<\/p>\n<p><strong>\u4e2d\u6587\u6458\u8981\uff1a<\/strong>AWS ML Blog \u6df1\u5165\u63a2\u8ba8\u4e86\u57fa\u4e8e LLM-as-a-judge \u7684\u5f3a\u5316\u5fae\u8c03\uff08RLAIF\uff09\u6280\u672f\uff0c\u5e76\u5c55\u793a\u4e86\u5982\u4f55\u5c06\u5176\u5e94\u7528\u4e8e Amazon Nova \u6a21\u578b\u3002\u6587\u7ae0\u5bf9\u6bd4\u4e86 RLAIF \u4e0e\u4f20\u7edf RLVR\uff08\u53ef\u9a8c\u8bc1\u5956\u52b1\u5f3a\u5316\u5b66\u4e60\uff09\u7684\u5dee\u5f02\uff0c\u6307\u51fa LLM \u8bc4\u5224\u5668\u80fd\u591f\u4ece\u6b63\u786e\u6027\u3001\u8bed\u6c14\u3001\u5b89\u5168\u6027\u548c\u76f8\u5173\u6027\u7b49\u591a\u7ef4\u5ea6\u63d0\u4f9b\u4e0a\u4e0b\u6587\u611f\u77e5\u7684\u53cd\u9988\uff0c\u5e76\u9644\u5e26\u53ef\u89e3\u91ca\u7684\u7406\u7531\u3002\u6587\u4e2d\u8fd8\u8be6\u7ec6\u4ecb\u7ecd\u4e86\u5b9e\u65bd LLM-as-a-judge 
\u7684\u516d\u4e2a\u5173\u952e\u6b65\u9aa4\uff0c\u5305\u62ec\u8bc4\u5224\u67b6\u6784\u9009\u62e9\u3001\u8bc4\u5206\u6807\u51c6\u8bbe\u8ba1\u3001\u53c2\u8003\u6837\u672c\u6784\u5efa\u3001\u8f93\u51fa\u683c\u5f0f\u5b9a\u4e49\u3001\u63a8\u7406\u53c2\u6570\u8c03\u4f18\u4ee5\u53ca\u9488\u5bf9\u8fb9\u7f18\u6848\u4f8b\u7684\u8fed\u4ee3\u4f18\u5316\u3002<\/p>\n<p><strong>English Summary:<\/strong> AWS ML Blog explores Reinforcement Learning from AI Feedback (RLAIF) using LLM-as-a-judge with Amazon Nova models. The post compares RLAIF against RLVR, highlighting that LLM judges provide multi-dimensional, context-aware feedback with explainable rationales. It outlines six critical implementation steps: selecting judge architecture, designing rubrics, building reference sets, defining output formats, tuning inference parameters, and iterating on edge cases.<\/p>\n<p><a href=\"https:\/\/aws.amazon.com\/blogs\/machine-learning\/reinforcement-fine-tuning-with-llm-as-a-judge\/\" target=\"_blank\" rel=\"noopener noreferrer\">\u539f\u6587\u94fe\u63a5<\/a><\/p>\n<\/li>\n<li>\n<p><strong>GitHub Copilot CLI for Beginners: Interactive v. 
non-interactive mode<\/strong>\uff08GitHub AI\/ML\uff09<\/p>\n<p><strong>\u4e2d\u6587\u6458\u8981\uff1a<\/strong>GitHub \u5b98\u65b9\u535a\u5ba2\u53d1\u5e03 GitHub Copilot CLI \u521d\u5b66\u8005\u7cfb\u5217\u6559\u7a0b\uff0c\u8be6\u89e3\u4ea4\u4e92\u5f0f\u4e0e\u975e\u4ea4\u4e92\u5f0f\u4e24\u79cd\u6a21\u5f0f\u7684\u4f7f\u7528\u573a\u666f\u4e0e\u64cd\u4f5c\u65b9\u6cd5\u3002\u4ea4\u4e92\u5f0f\u6a21\u5f0f\u63d0\u4f9b\u7c7b\u4f3c\u804a\u5929\u7684\u591a\u8f6e\u5bf9\u8bdd\u4f53\u9a8c\uff0c\u9002\u5408\u63a2\u7d22\u6027\u3001\u9700\u8981\u8fed\u4ee3\u534f\u4f5c\u7684\u590d\u6742\u4efb\u52a1\uff1b\u975e\u4ea4\u4e92\u5f0f\u6a21\u5f0f\u5219\u901a\u8fc7 `copilot -p` \u547d\u4ee4\u5b9e\u73b0\u5feb\u901f\u5355\u6b21\u95ee\u7b54\uff0c\u9002\u7528\u4e8e\u4ee3\u7801\u7247\u6bb5\u751f\u6210\u3001\u4ed3\u5e93\u6458\u8981\u6216\u81ea\u52a8\u5316\u5de5\u4f5c\u6d41\u96c6\u6210\u3002\u6587\u7ae0\u8fd8\u4ecb\u7ecd\u4e86\u5982\u4f55\u901a\u8fc7 `\/r` \u547d\u4ee4\u6062\u590d\u4e4b\u524d\u7684\u4f1a\u8bdd\u4e0a\u4e0b\u6587\uff0c\u5e2e\u52a9\u7528\u6237\u6839\u636e\u5de5\u4f5c\u9700\u6c42\u7075\u6d3b\u9009\u62e9\u5408\u9002\u6a21\u5f0f\u3002<\/p>\n<p><strong>English Summary:<\/strong> GitHub&#039;s blog introduces interactive and non-interactive modes for GitHub Copilot CLI beginners. Interactive mode offers a chat-like, multi-turn experience ideal for exploratory tasks, while non-interactive mode via `copilot -p` provides quick one-off answers for code snippets, repo summaries, or automation workflows. 
The post also covers session resumption with `\/r` to maintain context across conversations.<\/p>\n<p><a href=\"https:\/\/github.blog\/ai-and-ml\/github-copilot\/github-copilot-cli-for-beginners-interactive-v-non-interactive-mode\/\" target=\"_blank\" rel=\"noopener noreferrer\">\u539f\u6587\u94fe\u63a5<\/a><\/p>\n<\/li>\n<li>\n<p><strong>[AINews] The Inference Inflection<\/strong>\uff08Latent Space\uff09<\/p>\n<p><strong>\u4e2d\u6587\u6458\u8981\uff1a<\/strong>Latent Space \u64ad\u5ba2\u6587\u7ae0\u805a\u7126 AI \u884c\u4e1a\u6b63\u5728\u7ecf\u5386\u7684&quot;\u63a8\u7406\u62d0\u70b9&quot;\uff08Inference Inflection\uff09\u3002\u968f\u7740 GPT-5.5 \u7b49\u6a21\u578b\u7684\u6210\u529f\u53d1\u5e03\uff0cSam Altman \u516c\u5f00\u8868\u793a OpenAI \u5fc5\u987b\u8f6c\u578b\u4e3a AI \u63a8\u7406\u516c\u53f8\uff0cNoam Brown \u4e5f\u5f3a\u8c03\u63a8\u7406\u8ba1\u7b97\u662f\u6218\u7565\u8d44\u6e90\u3002\u6587\u7ae0\u5f15\u7528 Intel CEO \u8d22\u62a5\u6570\u636e\u6307\u51fa\uff0c\u9664 GPU \u5916\uff0cCPU \u8ba1\u7b97\u9700\u6c42\u6b63\u56e0 RL \u8bad\u7ec3\u6c99\u76d2\u3001\u751f\u4ea7\u7ea7 Agent \u548c\u4ee3\u7801\u6267\u884c\u7b49\u573a\u666f\u800c\u6fc0\u589e\uff0c\u53e0\u52a0\u4f01\u4e1a COVID \u65f6\u671f\u670d\u52a1\u5668\u7684\u81ea\u7136\u66f4\u65b0\u5468\u671f\uff0c\u53ef\u80fd\u5f15\u53d1 CPU \u77ed\u7f3a\u3002NVIDIA CEO \u9ec4\u4ec1\u52cb\u5728 GTC \u6f14\u8bb2\u4e2d\u8868\u793a\uff0cAI \u5de5\u4f5c\u8d1f\u8f7d\u7684\u8ba1\u7b97\u9700\u6c42\u5728\u8fc7\u53bb\u4e24\u5e74\u589e\u957f\u4e86\u7ea6 10,000 \u500d\u3002<\/p>\n<p><strong>English Summary:<\/strong> Latent Space discusses the &quot;Inference Inflection&quot; in AI, highlighting industry leaders&#039; statements on the strategic importance of inference compute. Sam Altman notes OpenAI must become an inference company, while Intel&#039;s CEO cites rising CPU demand from RL training sandboxes and production agents. 
NVIDIA&#039;s Jensen Huang stated at GTC that AI compute demand has increased roughly 10,000x in two years as AI shifts from training to thinking and doing.<\/p>\n<p><a href=\"https:\/\/www.latent.space\/p\/ainews-the-inference-inflection\" target=\"_blank\" rel=\"noopener noreferrer\">\u539f\u6587\u94fe\u63a5<\/a><\/p>\n<\/li>\n<li>\n<p><strong>Introducing Advanced Account Security<\/strong>\uff08OpenAI News\uff09<\/p>\n<p><strong>\u4e2d\u6587\u6458\u8981\uff1a<\/strong>OpenAI \u63a8\u51fa\u9762\u5411\u9ad8\u98ce\u9669\u7528\u6237\u7684&quot;\u9ad8\u7ea7\u8d26\u6237\u5b89\u5168&quot;\u529f\u80fd\uff0c\u63d0\u4f9b\u9632\u9493\u9c7c\u767b\u5f55\u3001\u5f3a\u5316\u8d26\u6237\u6062\u590d\u548c\u589e\u5f3a\u4fdd\u62a4\u7684\u4e00\u7ad9\u5f0f\u5b89\u5168\u8bbe\u7f6e\u3002\u8be5\u529f\u80fd\u8981\u6c42\u4f7f\u7528\u901a\u884c\u5bc6\u94a5\u6216\u7269\u7406\u5b89\u5168\u5bc6\u94a5\uff08\u5982 YubiKey\uff09\u66ff\u4ee3\u5bc6\u7801\u767b\u5f55\uff0c\u7981\u7528\u90ae\u4ef6\u548c\u77ed\u4fe1\u6062\u590d\u65b9\u5f0f\uff0c\u6539\u4e3a\u5907\u4efd\u5bc6\u94a5\u548c\u6062\u590d\u5bc6\u94a5\uff1b\u540c\u65f6\u7f29\u77ed\u4f1a\u8bdd\u6709\u6548\u671f\u3001\u63d0\u4f9b\u767b\u5f55\u63d0\u9192\u548c\u4f1a\u8bdd\u7ba1\u7406\u529f\u80fd\uff0c\u5e76\u81ea\u52a8\u6392\u9664\u5bf9\u8bdd\u6570\u636e\u7528\u4e8e\u6a21\u578b\u8bad\u7ec3\u3002OpenAI \u4e0e Yubico \u5408\u4f5c\u63d0\u4f9b\u4f18\u60e0\u7684\u5b89\u5168\u5bc6\u94a5\u5957\u88c5\uff0c\u8be5\u529f\u80fd\u9002\u7528\u4e8e ChatGPT \u548c Codex \u8d26\u6237\u3002<\/p>\n<p><strong>English Summary:<\/strong> OpenAI introduces Advanced Account Security, an opt-in setting for high-risk users featuring phishing-resistant login via passkeys or security keys, stronger recovery methods replacing email\/SMS, shorter session durations with login alerts, and automatic exclusion from model training. The company partnered with Yubico for discounted security key bundles. 
The protections apply to both ChatGPT and Codex accounts.<\/p>\n<p><a href=\"https:\/\/openai.com\/index\/advanced-account-security\" target=\"_blank\" rel=\"noopener noreferrer\">\u539f\u6587\u94fe\u63a5<\/a><\/p>\n<\/li>\n<li>\n<p><strong>Where the goblins came from<\/strong>\uff08OpenAI News\uff09<\/p>\n<p><strong>\u4e2d\u6587\u6458\u8981\uff1a<\/strong>OpenAI \u5b98\u65b9\u535a\u5ba2\u63ed\u79d8 GPT-5 \u7cfb\u5217\u6a21\u578b\u4e2d&quot;\u54e5\u5e03\u6797&quot;\uff08goblin\uff09\u7b49\u5947\u5e7b\u751f\u7269\u9690\u55bb\u9891\u7e41\u51fa\u73b0\u7684\u73b0\u8c61\u3002\u8c03\u67e5\u663e\u793a\uff0c\u8be5\u884c\u4e3a\u6e90\u4e8e\u4e3a&quot;Nerdy&quot;\u4e2a\u6027\u5b9a\u5236\u529f\u80fd\u8fdb\u884c\u5f3a\u5316\u5b66\u4e60\u8bad\u7ec3\u65f6\uff0c\u7cfb\u7edf\u65e0\u610f\u4e2d\u7ed9\u4e88\u5305\u542b\u751f\u7269\u9690\u55bb\u7684\u8f93\u51fa\u8fc7\u9ad8\u5956\u52b1\u3002\u6570\u636e\u663e\u793a\uff0c\u9009\u62e9 Nerdy \u4e2a\u6027\u7684\u7528\u6237\u4ec5\u5360 2.5%\uff0c\u5374\u4ea7\u751f\u4e86 66.7% \u7684\u54e5\u5e03\u6797\u63d0\u53ca\u3002\u8fd9\u4e00\u98ce\u683c\u968f\u540e\u6269\u6563\u81f3\u5176\u4ed6\u573a\u666f\uff0c\u5bfc\u81f4 GPT-5.4 \u4e2d\u8be5\u73b0\u8c61\u663e\u8457\u52a0\u5267\u3002OpenAI \u5df2\u8bc6\u522b\u5e76\u4fee\u590d\u6b64\u95ee\u9898\uff0c\u6587\u7ae0\u5c55\u793a\u4e86\u4ece\u7528\u6237\u53cd\u9988\u3001\u6570\u636e\u8ffd\u8e2a\u5230\u6839\u56e0\u5206\u6790\u548c\u4fee\u590d\u7684\u5b8c\u6574\u6392\u67e5\u8fc7\u7a0b\u3002<\/p>\n<p><strong>English Summary:<\/strong> OpenAI explains the origin of &quot;goblin&quot; and creature metaphors in GPT-5 models. The behavior stemmed from RL training for the &quot;Nerdy&quot; personality feature, which inadvertently rewarded creature-word outputs. Though only 2.5% of users selected Nerdy, it generated 66.7% of goblin mentions. The quirk spread to other contexts, intensifying in GPT-5.4. 
OpenAI has since identified and fixed the root cause, detailing their investigation from user reports to resolution.<\/p>\n<p><a href=\"https:\/\/openai.com\/index\/where-the-goblins-came-from\" target=\"_blank\" rel=\"noopener noreferrer\">\u539f\u6587\u94fe\u63a5<\/a><\/p>\n<\/li>\n<li>\n<p><strong>[AINews] not much happened today<\/strong>\uff08Latent Space\uff09<\/p>\n<p><strong>\u4e2d\u6587\u6458\u8981\uff1a<\/strong>AINews \u4eca\u65e5\u7b80\u62a5\u627f\u8ba4\u5f53\u5929 AI \u9886\u57df\u76f8\u5bf9\u5e73\u9759\uff0c\u4f46\u63d0\u53ca\u4e86\u82e5\u5e72\u503c\u5f97\u5173\u6ce8\u7684\u6a21\u578b\u53d1\u5e03\uff1aNVIDIA \u63a8\u51fa 30B \u53c2\u6570\u7684\u591a\u6a21\u6001 MoE \u6a21\u578b Nemotron 3 Nano Omni\uff0c\u652f\u6301 256K \u4e0a\u4e0b\u6587\u548c\u6587\u672c\u3001\u56fe\u50cf\u3001\u89c6\u9891\u3001\u97f3\u9891\u3001\u6587\u6863\u5904\u7406\uff1bPoolside \u9996\u6b21\u516c\u5f00\u53d1\u5e03 Laguna XS.2\uff0833B\/3B MoE\uff09\u548c Laguna M.1 \u7f16\u7a0b\u6a21\u578b\uff0c\u91c7\u7528 Apache 2.0 \u8bb8\u53ef\u8bc1\uff1b\u5fae\u8f6f\u5f00\u6e90 TRELLIS.2 \u56fe\u50cf\u8f6c 3D \u6a21\u578b\uff1bvLLM v0.20 \u53d1\u5e03\uff0c\u5e26\u6765 TurboQuant 2-bit KV \u7f13\u5b58\u3001vLLM IR \u65b0\u57fa\u7840\u67b6\u6784\u548c\u5bf9 DeepSeek V4 \u7684\u652f\u6301\uff1bMistral \u63a8\u51fa Workflows \u7f16\u6392\u5c42\u9884\u89c8\uff1b\u4ee5\u53ca\u793e\u533a\u5bf9 GPT-6 \u7684\u671f\u5f85\u5f00\u59cb\u5347\u6e29\u3002<\/p>\n<p><strong>English Summary:<\/strong> AINews daily brief notes a quiet day in AI but highlights notable model releases: NVIDIA&#039;s Nemotron 3 Nano Omni (30B\/A3B multimodal MoE with 256K context), Poolside&#039;s first public release of Laguna XS.2 and M.1 coding models under Apache 2.0, Microsoft&#039;s TRELLIS.2 open-source image-to-3D model, vLLM v0.20 with TurboQuant 2-bit KV cache and DeepSeek V4 support, Mistral&#039;s Workflows orchestration preview, and growing GPT-6 hype.<\/p>\n<p><a 
href=\"https:\/\/www.latent.space\/p\/ainews-not-much-happened-today\" target=\"_blank\" rel=\"noopener noreferrer\">\u539f\u6587\u94fe\u63a5<\/a><\/p>\n<\/li>\n<li>\n<p><strong>Reading today&#039;s open-closed performance gap<\/strong>\uff08Interconnects\uff09<\/p>\n<p><strong>\u4e2d\u6587\u6458\u8981\uff1a<\/strong>\u6587\u7ae0\u6df1\u5165\u63a2\u8ba8\u4e86\u5f00\u6e90\u4e0e\u95ed\u6e90\u6a21\u578b\u4e4b\u95f4\u7684\u6027\u80fd\u5dee\u8ddd\u8bc4\u4f30\u95ee\u9898\uff0c\u6307\u51fa\u5c06\u8fd9\u4e00\u590d\u6742\u52a8\u6001\u7b80\u5316\u4e3a\u5355\u4e00\u6570\u5b57\u4f1a\u63a9\u76d6\u5173\u952e\u7ec6\u8282\u3002\u4f5c\u8005\u5206\u6790\u4e86\u57fa\u51c6\u6d4b\u8bd5\u968f\u65f6\u95f4\u6f14\u53d8\u3001\u6a21\u578b\u771f\u5b9e\u4e16\u754c\u8868\u73b0\u4e0e\u6392\u540d\u4e4b\u95f4\u7684\u5173\u7cfb\uff0c\u4ee5\u53ca\u8bad\u7ec3\u65b9\u6cd5\u7684\u53d8\u5316\u5982\u4f55\u5f71\u54cd\u8bc4\u4f30\u7ed3\u679c\u3002\u6587\u7ae0\u5f3a\u8c03\uff0c\u5f53\u524d\u524d\u6cbf\u5b9e\u9a8c\u5ba4\u5728\u7f16\u7a0b\u548c\u7ec8\u7aef\u4efb\u52a1\u4e0a\u6295\u5165\u5de8\u8d44\uff0c\u800c\u5f00\u6e90\u6a21\u578b\uff08\u5c24\u5176\u662f\u4e2d\u56fd\u5b9e\u9a8c\u5ba4\uff09\u5728\u8ffd\u8d76\u8fc7\u7a0b\u4e2d\u9762\u4e34 RL \u73af\u5883\u6784\u5efa\u7b49\u6311\u6218\u3002\u4f5c\u8005\u8ba4\u4e3a\uff0c\u968f\u7740\u4efb\u52a1\u96be\u5ea6\u589e\u52a0\u548c\u6240\u9700\u6570\u636e\u53d8\u5f97\u66f4\u52a0\u4e13\u6709\uff0c\u5f00\u6e90\u6a21\u578b\u7ef4\u6301\u7ade\u4e89\u529b\u7684\u96be\u5ea6\u5c06\u52a0\u5927\uff0c\u4f46\u57fa\u51c6\u6d4b\u8bd5\u5e76\u4e0d\u80fd\u5b8c\u5168\u53cd\u6620\u771f\u5b9e\u80fd\u529b\u5dee\u8ddd\u3002<\/p>\n<p><strong>English Summary:<\/strong> The article examines the nuanced dynamics behind open-vs-closed model performance gaps, arguing that reducing this complex relationship to a single number obscures crucial factors. 
It analyzes how benchmarks evolve over time, the correlation between benchmark rankings and real-world performance, and how training regimes shift across paradigms.<\/p>\n<p><a href=\"https:\/\/www.interconnects.ai\/p\/reading-todays-open-closed-performance\" target=\"_blank\" rel=\"noopener noreferrer\">\u539f\u6587\u94fe\u63a5<\/a><\/p>\n<\/li>\n<li>\n<p><strong>Building an emoji list generator with the GitHub Copilot CLI<\/strong>\uff08GitHub AI\/ML\uff09<\/p>\n<p><strong>\u4e2d\u6587\u6458\u8981\uff1a<\/strong>GitHub \u535a\u5ba2\u4ecb\u7ecd\u4e86\u5982\u4f55\u4f7f\u7528 GitHub Copilot CLI \u6784\u5efa\u4e00\u4e2a\u8868\u60c5\u7b26\u53f7\u5217\u8868\u751f\u6210\u5668\u3002\u8be5\u9879\u76ee\u5728 Rubber Duck Thursday \u76f4\u64ad\u6d3b\u52a8\u4e2d\u5f00\u53d1\uff0c\u4f7f\u7528 @opentui\/core \u6784\u5efa\u7ec8\u7aef\u754c\u9762\u3001@github\/copilot-sdk \u63d0\u4f9b AI \u80fd\u529b\u3001clipboardy \u5b9e\u73b0\u526a\u8d34\u677f\u529f\u80fd\u3002\u7528\u6237\u53ef\u4ee5\u5728\u7ec8\u7aef\u7c98\u8d34\u6216\u8f93\u5165\u5217\u8868\uff0c\u6309 Ctrl+S \u540e\u81ea\u52a8\u751f\u6210\u5e26\u76f8\u5173\u8868\u60c5\u7b26\u53f7\u7684\u5217\u8868\u5e76\u590d\u5236\u5230\u526a\u8d34\u677f\u3002\u5f00\u53d1\u8fc7\u7a0b\u5c55\u793a\u4e86 Copilot CLI \u7684 Plan \u6a21\u5f0f\u3001Autopilot \u6a21\u5f0f\u3001\u591a\u6a21\u578b\u5de5\u4f5c\u6d41\uff08Claude Sonnet 4.6 \u548c Opus 4.7\uff09\u3001allow-all \u5de5\u5177\u6807\u5fd7\u4ee5\u53ca GitHub MCP \u670d\u52a1\u5668\u7b49\u529f\u80fd\u3002<\/p>\n<p><strong>English Summary:<\/strong> GitHub Blog demonstrates building an emoji list generator using GitHub Copilot CLI during the Rubber Duck Thursday stream. The project uses @opentui\/core for terminal UI, @github\/copilot-sdk for AI, and clipboardy for clipboard access. Users paste or type bullet points in the terminal, press Ctrl+S, and receive a list with relevant emojis copied to clipboard. 
The development showcased Copilot CLI features including Plan mode, Autopilot mode, multi-model workflow (Claude Sonnet 4.6 and Opus 4.7), allow-all tools flag, and GitHub MCP server.<\/p>\n<p><a href=\"https:\/\/github.blog\/ai-and-ml\/github-copilot\/building-an-emoji-list-generator-with-the-github-copilot-cli\/\" target=\"_blank\" rel=\"noopener noreferrer\">\u539f\u6587\u94fe\u63a5<\/a><\/p>\n<\/li>\n<li>\n<p><strong>Ollama is now powered by MLX on Apple Silicon in preview<\/strong>\uff08Ollama Blog\uff09<\/p>\n<p><strong>\u4e2d\u6587\u6458\u8981\uff1a<\/strong>Ollama \u53d1\u5e03\u9884\u89c8\u7248\uff0c\u5728 Apple Silicon \u4e0a\u96c6\u6210 MLX \u6846\u67b6\uff0c\u5b9e\u73b0\u6027\u80fd\u5927\u5e45\u63d0\u5347\u3002\u65b0\u7248\u672c\u5229\u7528 Apple \u7684\u7edf\u4e00\u5185\u5b58\u67b6\u6784\uff0c\u5728 M5\u3001M5 Pro \u548c M5 Max \u82af\u7247\u4e0a\u901a\u8fc7 GPU Neural Accelerators \u52a0\u901f\u9996 token \u65f6\u95f4\u548c\u751f\u6210\u901f\u5ea6\u3002\u6d4b\u8bd5\u663e\u793a Qwen3.5-35B-A3B \u6a21\u578b\u5728 NVFP4 \u91cf\u5316\u4e0b\u53ef\u8fbe 1851 token\/s \u7684 prefill \u901f\u5ea6\u548c 134 token\/s \u7684 decode \u901f\u5ea6\u3002Ollama \u65b0\u589e NVFP4 \u652f\u6301\u4ee5\u4fdd\u6301\u4e0e\u751f\u4ea7\u73af\u5883\u7684\u7ed3\u679c\u4e00\u81f4\u6027\uff0c\u5e76\u6539\u8fdb\u4e86\u7f13\u5b58\u673a\u5236\uff0c\u5305\u62ec\u8de8\u5bf9\u8bdd\u7f13\u5b58\u590d\u7528\u3001\u667a\u80fd\u68c0\u67e5\u70b9\u548c\u66f4\u667a\u80fd\u7684\u6dd8\u6c70\u7b56\u7565\uff0c\u4f7f\u7f16\u7a0b\u548c\u4ee3\u7406\u4efb\u52a1\u66f4\u9ad8\u6548\u3002<\/p>\n<p><strong>English Summary:<\/strong> Ollama releases a preview powered by Apple&#039;s MLX framework on Apple Silicon, delivering significant performance improvements. The new version leverages unified memory architecture and GPU Neural Accelerators on M5 series chips for faster time-to-first-token and generation speeds. 
Benchmarks show Qwen3.5-35B-A3B with NVFP4 quantization achieving 1,851 tokens\/s prefill and 134 tokens\/s decode.<\/p>\n<p><a href=\"https:\/\/ollama.com\/blog\/mlx\" target=\"_blank\" rel=\"noopener noreferrer\">\u539f\u6587\u94fe\u63a5<\/a><\/p>\n<\/li>\n<li>\n<p><strong>Introducing Claude Design by Anthropic Labs<\/strong>\uff08Anthropic News\uff09<\/p>\n<p><strong>\u4e2d\u6587\u6458\u8981\uff1a<\/strong>Anthropic Labs \u63a8\u51fa Claude Design\uff0c\u4e00\u6b3e\u57fa\u4e8e Claude Opus 4.7 \u7684\u89c6\u89c9\u8bbe\u8ba1\u534f\u4f5c\u5de5\u5177\uff0c\u9762\u5411 Pro\u3001Max\u3001Team \u548c Enterprise \u8ba2\u9605\u8005\u5f00\u653e\u7814\u7a76\u9884\u89c8\u3002\u7528\u6237\u53ef\u901a\u8fc7\u81ea\u7136\u8bed\u8a00\u63cf\u8ff0\u9700\u6c42\uff0cClaude \u751f\u6210\u8bbe\u8ba1\u521d\u7a3f\uff0c\u968f\u540e\u901a\u8fc7\u5bf9\u8bdd\u3001\u5185\u8054\u8bc4\u8bba\u3001\u76f4\u63a5\u7f16\u8f91\u6216\u81ea\u5b9a\u4e49\u6ed1\u5757\u8fdb\u884c\u8fed\u4ee3\u4f18\u5316\u3002\u4ea7\u54c1\u652f\u6301\u4ece\u6587\u672c\u3001\u56fe\u7247\u3001\u6587\u6863\u5bfc\u5165\uff0c\u53ef\u81ea\u52a8\u5e94\u7528\u56e2\u961f\u8bbe\u8ba1\u7cfb\u7edf\uff0c\u5b9e\u73b0\u7ec4\u7ec7\u5185\u5171\u4eab\u534f\u4f5c\uff0c\u5e76\u652f\u6301\u5bfc\u51fa\u4e3a PPTX\u3001PDF\u3001HTML \u6216\u540c\u6b65\u81f3 Canva\u3002Claude Design \u8fd8\u53ef\u4e0e Claude Code \u65e0\u7f1d\u8854\u63a5\uff0c\u5c06\u8bbe\u8ba1\u76f4\u63a5\u79fb\u4ea4\u5f00\u53d1\u3002\u65e9\u671f\u7528\u6237\u5305\u62ec Canva\u3001Brilliant \u548c Datadog \u7b49\u3002<\/p>\n<p><strong>English Summary:<\/strong> Anthropic Labs launches Claude Design, a visual design collaboration tool powered by Claude Opus 4.7, available in research preview for Pro, Max, Team, and Enterprise subscribers. Users describe their needs in natural language for Claude to generate initial designs, then iterate through conversation, inline comments, direct edits, or custom sliders. 
The product supports importing from text, images, and documents, automatically applies team design systems, enables organization-scoped sharing, and exports to PPTX, PDF, HTML, or Canva. Claude Design seamlessly hands off to Claude Code for development. Early adopters include Canva, Brilliant, and Datadog.<\/p>\n<p><a href=\"https:\/\/www.anthropic.com\/news\/claude-design-anthropic-labs\" target=\"_blank\" rel=\"noopener noreferrer\">\u539f\u6587\u94fe\u63a5<\/a><\/p>\n<\/li>\n<\/ol>\n","protected":false},"excerpt":{"rendered":"<p>\u65e5\u671f\uff1a2026-05-01 \u672c\u671f\u805a\u7126\uff1a\u91cd\u70b9\u5173\u6ce8\u6a21\u578b\u53d1\u5e03\u4e0e release notes\u3001\u5b98\u65b9 engineeri [&hellip;]<\/p>\n","protected":false},"author":0,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[7],"tags":[],"class_list":["post-369","post","type-post","status-publish","format-standard","hentry","category-ai-daily"],"_links":{"self":[{"href":"http:\/\/www.faiyi.com\/index.php?rest_route=\/wp\/v2\/posts\/369","targetHints":{"allow":["GET"]}}],"collection":[{"href":"http:\/\/www.faiyi.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"http:\/\/www.faiyi.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"replies":[{"embeddable":true,"href":"http:\/\/www.faiyi.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=369"}],"version-history":[{"count":0,"href":"http:\/\/www.faiyi.com\/index.php?rest_route=\/wp\/v2\/posts\/369\/revisions"}],"wp:attachment":[{"href":"http:\/\/www.faiyi.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=369"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"http:\/\/www.faiyi.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=369"},{"taxonomy":"post_tag","embeddable":true,"href":"http:\/\/www.faiyi.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=369"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}
}