{"id":367,"date":"2026-04-30T07:28:30","date_gmt":"2026-04-29T23:28:30","guid":{"rendered":"http:\/\/www.faiyi.com\/?p=367"},"modified":"2026-04-30T07:28:30","modified_gmt":"2026-04-29T23:28:30","slug":"ai%e5%8a%a8%e6%80%81%e6%af%8f%e6%97%a5%e7%ae%80%e6%8a%a5-2026-04-30-2","status":"publish","type":"post","link":"http:\/\/www.faiyi.com\/?p=367","title":{"rendered":"AI\u52a8\u6001\u6bcf\u65e5\u7b80\u62a5 2026-04-30"},"content":{"rendered":"<p>\u65e5\u671f\uff1a2026-04-30<\/p>\n<p>\u672c\u671f\u805a\u7126\uff1a\u91cd\u70b9\u5173\u6ce8\u6a21\u578b\u53d1\u5e03\u4e0e release notes\u3001\u5b98\u65b9 engineering blog\u3001AI coding \/ agent \/ SRE\u3001\u8bc4\u6d4b\u699c\u5355\u53d8\u5316\u3001\u5f00\u53d1\u8005\u5b9e\u8df5\u535a\u5ba2\u3001\u6846\u67b6\u751f\u6001\u3001\u5f00\u6e90\u6a21\u578b\u4e0e\u771f\u5b9e\u7528\u6237\u89c6\u89d2\uff1b\u5f53 HN\u3001Reddit\u3001Hugging Face \u7b49\u793e\u533a\u6e90\u53ef\u8bbf\u95ee\u65f6\u4f18\u5148\u7eb3\u5165\u3002<\/p>\n<hr \/>\n<ol>\n<li>\n<p><strong>Artificial Analysis \u6700\u65b0\u6a21\u578b\u6392\u540d\u89c2\u5bdf<\/strong>\uff08Artificial Analysis\uff09<\/p>\n<p><strong>\u4e2d\u6587\u6458\u8981\uff1a<\/strong>Artificial Analysis \u6700\u65b0\u6a21\u578b\u6392\u540d\u663e\u793a\uff0cGPT-5.5 (xhigh) \u4ee5 60 \u5206\u7684\u667a\u80fd\u6307\u6570\u4f4d\u5c45\u699c\u9996\uff0cGPT-5.5 (high) \u4ee5 59 \u5206\u7d27\u968f\u5176\u540e\u3002Claude Opus 4.7 (Max Effort) \u4e0e Gemini 3.1 Pro Preview\u3001GPT-5.4 (xhigh) \u5e76\u5217\u7b2c\u4e09\uff0c\u5747\u83b7\u5f97 57 \u5206\u3002\u5f00\u6e90\u6a21\u578b\u65b9\u9762\uff0cKimi K2.6 \u4ee5 54 \u5206\u9886\u8dd1\uff0cMiMo-V2.5-Pro \u540c\u5206\u5e76\u5217\uff0cDeepSeek V4 Pro (Reasoning, Max Effort) \u4ee5 52 \u5206\u4f4d\u5217\u7b2c\u4e09\u3002\u901f\u5ea6\u65b9\u9762\uff0cMercury 2 \u4ee5 778.1 tokens\/\u79d2 \u5c45\u9996\uff1b\u6210\u672c\u65b9\u9762\uff0cQwen3.5 0.8B \u4ee5\u6bcf\u767e\u4e07 tokens 0.02 
\u7f8e\u5143\u6210\u4e3a\u6700\u7ecf\u6d4e\u9009\u62e9\u3002\u5e73\u53f0\u76ee\u524d\u5171\u8bc4\u4f30 367 \u4e2a\u6a21\u578b\uff0c\u5176\u4e2d 232 \u4e2a\u4e3a\u5f00\u6e90\u6743\u91cd\u6a21\u578b\u3002<\/p>\n<p><strong>English Summary:<\/strong> Artificial Analysis&#039; latest model rankings show GPT-5.5 (xhigh) leading with an Intelligence Index score of 60, followed by GPT-5.5 (high) at 59. Claude Opus 4.7 (Max Effort) ties with Gemini 3.1 Pro Preview and GPT-5.4 (xhigh) at 57 points. Among open weights models, Kimi K2.6 leads with 54 points, tied with MiMo-V2.5-Pro, while DeepSeek V4 Pro (Reasoning, Max Effort) ranks third at 52. Mercury 2 is fastest at 778.1 tokens\/s, and Qwen3.5 0.8B is most affordable at $0.02 per 1M tokens. The platform has evaluated 367 models total, including 232 open weights models.<\/p>\n<p><a href=\"https:\/\/artificialanalysis.ai\/models\" target=\"_blank\" rel=\"noopener noreferrer\">\u539f\u6587\u94fe\u63a5<\/a><\/p>\n<\/li>\n<li>\n<p><strong>Introducing Claude Opus 4.7<\/strong>\uff08Anthropic News\uff09<\/p>\n<p><strong>\u4e2d\u6587\u6458\u8981\uff1a<\/strong>Anthropic \u6b63\u5f0f\u53d1\u5e03 Claude Opus 4.7\uff0c\u8be5\u6a21\u578b\u5728\u591a\u9879\u4f01\u4e1a\u7ea7\u57fa\u51c6\u6d4b\u8bd5\u4e2d\u8868\u73b0\u663e\u8457\u63d0\u5347\u3002\u5728 Rakuten-SWE-Bench \u4e0a\uff0cOpus 4.7 \u89e3\u51b3\u751f\u4ea7\u4efb\u52a1\u7684\u6570\u91cf\u662f 4.6 \u7248\u672c\u7684\u4e09\u500d\uff0c\u4ee3\u7801\u8d28\u91cf\u4e0e\u6d4b\u8bd5\u8d28\u91cf\u5747\u6709\u4e24\u4f4d\u6570\u63d0\u5347\u3002\u89c6\u89c9\u7406\u89e3\u80fd\u529b\u5927\u5e45\u589e\u5f3a\uff0c\u5728 XBOW \u7684\u89c6\u89c9\u654f\u9510\u5ea6\u57fa\u51c6\u6d4b\u8bd5\u4e2d\u4ece 54.5% \u8dc3\u5347\u81f3 98.5%\u3002\u5728 Databricks \u7684 OfficeQA Pro \u6d4b\u8bd5\u4e2d\uff0c\u6587\u6863\u63a8\u7406\u9519\u8bef\u51cf\u5c11 
21%\u3002\u4f01\u4e1a\u7528\u6237\u53cd\u9988\u663e\u793a\uff0c\u8be5\u7248\u672c\u5728\u667a\u80fd\u4f53\u51b3\u7b56\u3001\u5de5\u5177\u8c03\u7528\u51c6\u786e\u6027\u3001\u89d2\u8272\u9075\u5faa\u548c\u590d\u6742\u5de5\u7a0b\u4efb\u52a1\u534f\u8c03\u65b9\u9762\u5747\u6709\u660e\u663e\u6539\u5584\uff0c\u4ee3\u7801\u8f93\u51fa\u4e5f\u66f4\u52a0\u7b80\u6d01\uff0c\u51cf\u5c11\u4e86\u5197\u4f59\u7684\u5305\u88c5\u51fd\u6570\u3002<\/p>\n<p><strong>English Summary:<\/strong> Anthropic officially released Claude Opus 4.7, showing significant improvements across enterprise benchmarks. On Rakuten-SWE-Bench, it resolves 3x more production tasks than Opus 4.6, with double-digit gains in Code Quality and Test Quality. Visual understanding improved dramatically from 54.5% to 98.5% on XBOW&#039;s visual-acuity benchmark. On Databricks&#039; OfficeQA Pro, document reasoning errors decreased by 21%.<\/p>\n<p><a href=\"https:\/\/www.anthropic.com\/news\/claude-opus-4-7\" target=\"_blank\" rel=\"noopener noreferrer\">\u539f\u6587\u94fe\u63a5<\/a><\/p>\n<\/li>\n<li>\n<p><strong>An update on recent Claude Code quality reports<\/strong>\uff08Anthropic Engineering\uff09<\/p>\n<p><strong>\u4e2d\u6587\u6458\u8981\uff1a<\/strong>Anthropic \u5de5\u7a0b\u56e2\u961f\u53d1\u5e03 Claude Code \u8fd1\u671f\u8d28\u91cf\u95ee\u9898\u7684\u590d\u76d8\u62a5\u544a\u30024 \u6708 16 \u65e5\u968f Opus 4.7 \u53d1\u5e03\u65f6\uff0c\u56e2\u961f\u4e3a\u51cf\u5c11\u6a21\u578b\u5197\u957f\u8f93\u51fa\u800c\u6dfb\u52a0\u4e86\u7cfb\u7edf\u63d0\u793a\u8bcd\u957f\u5ea6\u9650\u5236\uff08\u5de5\u5177\u8c03\u7528\u95f4\u6587\u672c \u226425 \u8bcd\uff0c\u6700\u7ec8\u56de\u590d \u2264100 
\u8bcd\uff09\uff0c\u8be5\u6539\u52a8\u610f\u5916\u5bfc\u81f4\u6a21\u578b\u667a\u80fd\u4e0b\u964d\u3002\u6b64\u5916\uff0c\u4e00\u9879\u7f13\u5b58\u4f18\u5316\u9519\u8bef\u5730\u4e22\u5f03\u4e86\u5386\u53f2\u63a8\u7406\u5185\u5bb9\uff0c\u5f71\u54cd\u4ee3\u7801\u5ba1\u67e5\u529f\u80fd\uff0c\u5bfc\u81f4\u6a21\u578b\u65e0\u6cd5\u57fa\u4e8e\u5148\u524d\u63a8\u7406\u7ee7\u7eed\u5de5\u4f5c\u3002\u56e2\u961f\u5728\u6536\u5230\u7528\u6237\u53cd\u9988\u540e\uff0c\u4e8e 4 \u6708 7 \u65e5\u5c06 Opus 4.7 \u9ed8\u8ba4 effort \u7ea7\u522b\u6062\u590d\u4e3a xhigh\uff0c\u5e76\u4e8e 4 \u6708 10 \u65e5\u53d1\u5e03 v2.1.101 \u4fee\u590d\u7f13\u5b58\u95ee\u9898\u3002\u590d\u76d8\u8fd8\u63d0\u5230 Opus 4.7 \u5728\u83b7\u5f97\u5b8c\u6574\u4ee3\u7801\u4ed3\u5e93\u4e0a\u4e0b\u6587\u65f6\uff0c\u80fd\u591f\u53d1\u73b0 4.6 \u7248\u672c\u9057\u6f0f\u7684 bug\u3002<\/p>\n<p><strong>English Summary:<\/strong> Anthropic&#039;s engineering team published a postmortem on recent Claude Code quality issues. A system prompt change adding length limits (\u226425 words between tool calls, \u2264100 words for final responses) to reduce verbosity, shipped with Opus 4.7 on April 16, unexpectedly degraded model intelligence. Additionally, a caching optimization incorrectly dropped prior reasoning from conversation history, affecting code review functionality. After user feedback, the team reverted Opus 4.7 default effort to xhigh on April 7 and fixed the caching bug in v2.1.101 on April 10. 
The postmortem noted Opus 4.7 could identify bugs missed by 4.6 when given complete repository context.<\/p>\n<p><a href=\"https:\/\/www.anthropic.com\/engineering\/april-23-postmortem\" target=\"_blank\" rel=\"noopener noreferrer\">\u539f\u6587\u94fe\u63a5<\/a><\/p>\n<\/li>\n<li>\n<p><strong>Scaling Managed Agents: Decoupling the brain from the hands<\/strong>\uff08Anthropic Engineering\uff09<\/p>\n<p><strong>\u4e2d\u6587\u6458\u8981\uff1a<\/strong>Anthropic \u5de5\u7a0b\u56e2\u961f\u53d1\u5e03\u300aScaling Managed Agents: Decoupling the brain from the hands\u300b\u6280\u672f\u535a\u5ba2\uff0c\u4ecb\u7ecd\u6258\u7ba1\u667a\u80fd\u4f53\u7cfb\u7edf\u7684\u8bbe\u8ba1\u7406\u5ff5\u3002\u8be5\u7cfb\u7edf\u91c7\u7528\u5143\u67b6\u6784\uff08meta-harness\uff09\u601d\u8def\uff0c\u901a\u8fc7\u901a\u7528\u63a5\u53e3\u5c06\u6a21\u578b\u667a\u80fd\u4e0e\u5177\u4f53\u6267\u884c\u5de5\u5177\u89e3\u8026\uff0c\u4f7f\u540c\u4e00\u5e95\u5c42\u6a21\u578b\u80fd\u591f\u9002\u914d\u4e0d\u540c\u573a\u666f\u7684\u667a\u80fd\u4f53\u6846\u67b6\uff08\u5982 Claude Code \u6216\u7279\u5b9a\u9886\u57df\u4e13\u7528\u6846\u67b6\uff09\u3002\u7cfb\u7edf\u63d0\u4f9b\u6301\u4e45\u5316\u4f1a\u8bdd\u5b58\u50a8\u3001\u4e8b\u4ef6\u83b7\u53d6\u4e0e\u8f6c\u6362\u3001\u4e0a\u4e0b\u6587\u7ba1\u7406\u7b49\u80fd\u529b\uff0c\u652f\u6301\u957f\u65f6\u8fd0\u884c\u4efb\u52a1\u3002\u6587\u7ae0\u4e3e\u4f8b\u8bf4\u660e\u4e0d\u540c\u6a21\u578b\u884c\u4e3a\u5dee\u5f02\uff1aSonnet 4.5 \u66fe\u51fa\u73b0&quot;\u4e0a\u4e0b\u6587\u7126\u8651&quot;\uff08\u63a5\u8fd1\u4e0a\u4e0b\u6587\u4e0a\u9650\u65f6\u8fc7\u65e9\u7ed3\u675f\u4efb\u52a1\uff09\uff0c\u9700\u901a\u8fc7\u67b6\u6784\u5c42\u6dfb\u52a0\u4e0a\u4e0b\u6587\u91cd\u7f6e\u89e3\u51b3\uff0c\u800c Opus 4.5 \u5219\u65e0\u6b64\u95ee\u9898\u3002Managed Agents 
\u4f5c\u4e3a\u6258\u7ba1\u670d\u52a1\uff0c\u65e8\u5728\u901a\u8fc7\u7a33\u5b9a\u7684\u63a5\u53e3\u62bd\u8c61\uff0c\u9002\u5e94\u672a\u6765\u4e0d\u65ad\u6f14\u8fdb\u7684\u6a21\u578b\u548c\u6846\u67b6\u5b9e\u73b0\u3002<\/p>\n<p><strong>English Summary:<\/strong> Anthropic&#039;s engineering team published a technical blog on &quot;Scaling Managed Agents: Decoupling the brain from the hands,&quot; introducing the design philosophy of their managed agent system. The meta-harness approach decouples model intelligence from execution tools through general interfaces, allowing the same underlying model to adapt to different agent frameworks (like Claude Code or domain-specific harnesses). The system provides durable session storage, event fetching and transformation, and context management for long-horizon tasks. The article illustrates model behavioral differences: Sonnet 4.5 exhibited &quot;context anxiety&quot; (prematurely wrapping up tasks when approaching context limits) requiring harness-level context resets, while Opus 4.5 did not. 
Managed Agents aims to provide stable interface abstractions that outlast evolving model and framework implementations.<\/p>\n<p><a href=\"https:\/\/www.anthropic.com\/engineering\/managed-agents\" target=\"_blank\" rel=\"noopener noreferrer\">\u539f\u6587\u94fe\u63a5<\/a><\/p>\n<\/li>\n<li>\n<p><strong>Microsoft says it has over 20M paid Copilot users, and they really are using it<\/strong>\uff08TechCrunch AI\uff09<\/p>\n<p><strong>\u4e2d\u6587\u6458\u8981\uff1a<\/strong>\u5fae\u8f6f\u5ba3\u5e03 Microsoft 365 Copilot \u4ed8\u8d39\u7528\u6237\u7a81\u7834 2000 \u4e07\uff0c\u5e76\u5f3a\u8c03\u7528\u6237\u6d3b\u8dc3\u5ea6\u771f\u5b9e\u589e\u957f\u3002\u5c3d\u7ba1\u5916\u754c\u957f\u671f\u8d28\u7591 Copilot \u5b9e\u9645\u4f7f\u7528\u60c5\u51b5\uff0c\u5fae\u8f6f\u8868\u793a\u8be5\u529f\u80fd\u5728 Word\u3001Excel\u3001Outlook \u7b49\u5e94\u7528\u4e2d\u7684\u4f7f\u7528\u91cf\u6301\u7eed\u4e0a\u5347\u3002Agent \u6a21\u5f0f\u6210\u4e3a\u589e\u957f\u9a71\u52a8\u529b\uff0c\u76ee\u524d\u5df2\u4f5c\u4e3a Copilot \u53ca\u4e09\u5927\u529e\u516c\u5e94\u7528\u7684\u9ed8\u8ba4\u4f53\u9a8c\u3002\u5fae\u8f6f\u540c\u65f6\u5ba3\u5e03\u652f\u6301\u591a\u6a21\u578b\u7b56\u7565\uff0c\u7528\u6237\u53ef\u5728\u804a\u5929\u4e2d\u9ed8\u8ba4\u8bbf\u95ee\u591a\u4e2a\u6a21\u578b\uff0c\u901a\u8fc7\u667a\u80fd\u81ea\u52a8\u8def\u7531\u3001\u6279\u5224\u4e0e\u5efa\u8bae\u673a\u5236\u534f\u540c\u4f7f\u7528\u4e0d\u540c\u6a21\u578b\u751f\u6210\u6700\u4f18\u56de\u590d\u3002\u5fae\u8f6f\u5f3a\u8c03 Copilot \u4e0d\u4f9d\u8d56\u5355\u4e00\u6a21\u578b\uff08\u5982 OpenAI\uff09\uff0c\u5e76\u5df2\u5728\u5e73\u53f0\u4e2d\u652f\u6301 Anthropic Claude \u7b49\u5176\u4ed6\u6a21\u578b\u3002<\/p>\n<p><strong>English Summary:<\/strong> Microsoft announced that Microsoft 365 Copilot has surpassed 20 million paid users, emphasizing genuine engagement growth. Despite lingering skepticism about actual usage, Microsoft stated that usage within Word, Excel, and Outlook continues to rise. 
Agent mode is driving adoption and is now the default experience across Copilot and the three major Office apps. Microsoft also announced multi-model support, allowing users to access multiple models by default in chat, with intelligent auto-routing and critique-and-counsel mechanisms to combine models for optimal responses. The company emphasized that Copilot is not dependent on any single model like OpenAI, with Anthropic&#039;s Claude already supported on the platform.<\/p>\n<p><a href=\"https:\/\/techcrunch.com\/2026\/04\/29\/microsoft-says-it-has-over-20m-paid-copilot-users-and-they-really-are-using-it\/\" target=\"_blank\" rel=\"noopener noreferrer\">\u539f\u6587\u94fe\u63a5<\/a><\/p>\n<\/li>\n<li>\n<p><strong>Extracting contract insights with PwC\u2019s AI-driven annotation on AWS<\/strong>\uff08AWS ML Blog\uff09<\/p>\n<p><strong>\u4e2d\u6587\u6458\u8981\uff1a<\/strong>PwC\u4e0eAWS\u8054\u5408\u53d1\u5e03AI\u9a71\u52a8\u5408\u540c\u6807\u6ce8\u89e3\u51b3\u65b9\u6848AIDA\uff0c\u5229\u7528Amazon Bedrock\u5927\u8bed\u8a00\u6a21\u578b\u548cRAG\u6280\u672f\u5b9e\u73b0\u5408\u540c\u667a\u80fd\u5206\u6790\u3002\u8be5\u7cfb\u7edf\u652f\u6301\u6a21\u677f\u5316\u6570\u636e\u63d0\u53d6\u3001\u5355\u6587\u6863\u5bf9\u8bdd\u95ee\u7b54\u548c\u8de8\u6587\u6863\u5168\u5c40\u641c\u7d22\u4e09\u5927\u6838\u5fc3\u529f\u80fd\uff0c\u53ef\u5c06\u5408\u540c\u5ba1\u67e5\u65f6\u95f4\u7f29\u77ed\u9ad8\u8fbe90%\u3002\u67b6\u6784\u4e0a\u91c7\u7528Amazon ECS\u3001S3\u3001RDS\u3001OpenSearch 
Serverless\u7b49\u4e91\u539f\u751f\u670d\u52a1\uff0c\u7ed3\u5408OCR\u3001\u5411\u91cf\u68c0\u7d22\u548cLLM\u63a8\u7406\uff0c\u4e3a\u6cd5\u5f8b\u3001\u5408\u89c4\u548c\u91c7\u8d2d\u56e2\u961f\u63d0\u4f9b\u53ef\u6eaf\u6e90\u3001\u53ef\u9a8c\u8bc1\u7684\u5408\u540c\u6d1e\u5bdf\u3002\u67d0\u5927\u578b\u5f71\u89c6\u5de5\u4f5c\u5ba4\u5e94\u7528\u540e\uff0c\u7248\u6743\u7814\u7a76\u65f6\u95f4\u51cf\u5c1190%\uff0c\u5c55\u793a\u4e86\u8be5\u65b9\u6848\u5728\u5a92\u4f53\u5a31\u4e50\u3001\u623f\u5730\u4ea7\u7b49\u884c\u4e1a\u7684\u89c4\u6a21\u5316\u5e94\u7528\u6f5c\u529b\u3002<\/p>\n<p><strong>English Summary:<\/strong> PwC and AWS co-launched AIDA, an AI-driven contract annotation solution leveraging Amazon Bedrock LLMs and RAG to extract structured insights from legal documents. The system offers template-based extraction, document-level chat, and global cross-document search, reducing manual contract review time by up to 90%. Built on cloud-native AWS services including ECS, S3, RDS, and OpenSearch Serverless, AIDA combines OCR, vector retrieval, and LLM reasoning to provide traceable, verifiable contract intelligence for legal, compliance, and procurement teams. 
A major film studio achieved 90% reduction in rights research time, demonstrating scalability across media, entertainment, and real estate sectors.<\/p>\n<p><a href=\"https:\/\/aws.amazon.com\/blogs\/machine-learning\/extracting-contract-insights-with-pwcs-ai-driven-annotation-on-aws\/\" target=\"_blank\" rel=\"noopener noreferrer\">\u539f\u6587\u94fe\u63a5<\/a><\/p>\n<\/li>\n<li>\n<p><strong>Building the compute infrastructure for the Intelligence Age<\/strong>\uff08OpenAI News\uff09<\/p>\n<p><strong>\u4e2d\u6587\u6458\u8981\uff1a<\/strong>OpenAI\u5ba3\u5e03\u5176Stargate\u57fa\u7840\u8bbe\u65bd\u9879\u76ee\u5df2\u63d0\u524d\u5b8c\u6210\u539f\u5b9a2029\u5e74\u768410GW\u7b97\u529b\u76ee\u6807\uff0c\u8fc7\u53bb90\u5929\u65b0\u589e\u8d853GW\u5bb9\u91cf\u3002Stargate\u662fOpenAI\u4e3a\u6784\u5efa\u901a\u7528\u4eba\u5de5\u667a\u80fd\u6240\u9700\u7b97\u529b\u57fa\u7840\u800c\u8bbe\u7acb\u7684\u957f\u671f\u9879\u76ee\uff0c\u6700\u65b0\u65d7\u8230\u6a21\u578bGPT-5.5\u5373\u5728\u5f97\u5ddeAbilene\u6570\u636e\u4e2d\u5fc3\u8bad\u7ec3\u5b8c\u6210\u3002OpenAI\u5f3a\u8c03\u7b97\u529b\u662fAI\u53d1\u5c55\u7684\u6838\u5fc3\u8f93\u5165\uff0c\u66f4\u591a\u7b97\u529b\u652f\u6301\u66f4\u5f3a\u6a21\u578b\u8bad\u7ec3\u3001\u66f4\u53ef\u9760\u7684\u670d\u52a1\u548c\u66f4\u4f4e\u7684\u667a\u80fd\u4ea4\u4ed8\u6210\u672c\u3002\u516c\u53f8\u91c7\u7528\u5408\u4f5c\u4f19\u4f34\u6a21\u5f0f\u63a8\u8fdb\uff0c\u4e0eOracle\u3001Vantage\u7b49\u5408\u4f5c\u5efa\u8bbe\u6570\u636e\u4e2d\u5fc3\uff0c\u5e76\u627f\u8bfa\u4e3a\u5f53\u5730\u793e\u533a\u521b\u9020\u5c31\u4e1a\u3001\u6559\u80b2\u6295\u8d44\u548c\u8d1f\u8d23\u4efb\u7684\u6c34\u8d44\u6e90\u7ba1\u7406\u3002<\/p>\n<p><strong>English Summary:<\/strong> OpenAI announced its Stargate infrastructure project has already surpassed its original 10GW compute target set for 2029, with over 3GW added in the past 90 days alone. 
Stargate is OpenAI&#039;s long-term initiative to build the compute foundation required for AGI, with its latest flagship model GPT-5.5 trained at the Abilene, Texas facility. The company emphasizes compute as the critical input enabling better model training, more reliable serving, and lower intelligence delivery costs over time. OpenAI pursues a partner-centric approach with Oracle, Vantage, and others for data center construction, while committing to local job creation, educational investments, and responsible water stewardship in host communities.<\/p>\n<p><a href=\"https:\/\/openai.com\/index\/building-the-compute-infrastructure-for-the-intelligence-age\" target=\"_blank\" rel=\"noopener noreferrer\">\u539f\u6587\u94fe\u63a5<\/a><\/p>\n<\/li>\n<li>\n<p><strong>Presentation: Agents, Architecture, &amp; Amnesia: Becoming AI-Native Without Losing Our Minds<\/strong>\uff08InfoQ AI\/ML\uff09<\/p>\n<p><strong>\u4e2d\u6587\u6458\u8981\uff1a<\/strong>InfoQ\u53d1\u5e03Tracy Bannon\u7684\u6f14\u8bb2\u300aAgents, Architecture &amp; 
Amnesia\u300b\uff0c\u4ee5\u300a\u9b54\u6cd5\u5e08\u7684\u5b66\u5f92\u300b\u5bd3\u8a00\u8b66\u793a\u65e0\u8282\u5236AI\u81ea\u4e3b\u6027\u7684\u98ce\u9669\u3002\u6f14\u8bb2\u63a2\u8ba8\u4ece\u673a\u5668\u4eba\u5230\u81ea\u4e3b\u667a\u80fd\u4f53\u7684\u6f14\u8fdb\uff0c\u6307\u51fa\u76f2\u76ee\u8ffd\u6c42\u901f\u5ea6\u4f1a\u5bfc\u81f4&quot;\u67b6\u6784\u5931\u5fc6\u75c7&quot;\u2014\u2014\u7ec4\u7ec7\u5728\u5feb\u901f\u91c7\u7528AI\u7684\u8fc7\u7a0b\u4e2d\u4e27\u5931\u5bf9\u7cfb\u7edf\u8bbe\u8ba1\u548c\u51b3\u7b56\u903b\u8f91\u7684\u8ffd\u8e2a\u80fd\u529b\u3002Bannon\u5f3a\u8c03\u5728\u6210\u4e3aAI\u539f\u751f\u4f01\u4e1a\u7684\u8fc7\u7a0b\u4e2d\uff0c\u5fc5\u987b\u4fdd\u6301\u67b6\u6784\u6cbb\u7406\u548c\u4eba\u5de5\u76d1\u7763\uff0c\u907f\u514d\u8fc7\u5ea6\u81ea\u52a8\u5316\u5e26\u6765\u7684\u4e0d\u53ef\u63a7\u540e\u679c\u3002\u8be5\u6f14\u8bb2\u4e3a\u6b63\u5728\u90e8\u7f72AI\u667a\u80fd\u4f53\u7684\u4f01\u4e1a\u63d0\u4f9b\u4e86\u5173\u4e8e\u81ea\u4e3b\u6027\u4e0e\u53ef\u63a7\u6027\u5e73\u8861\u7684\u91cd\u8981\u601d\u8003\u6846\u67b6\u3002<\/p>\n<p><strong>English Summary:<\/strong> InfoQ published Tracy Bannon&#039;s presentation &quot;Agents, Architecture &amp; Amnesia,&quot; using the Sorcerer&#039;s Apprentice fable to illustrate risks of unbridled AI autonomy. The talk explores the evolution from bots to autonomous agents, warning that reckless speed leads to &quot;Architectural Amnesia&quot;\u2014where organizations lose track of system design and decision logic while rapidly adopting AI. 
Bannon emphasizes maintaining architectural governance and human oversight when becoming AI-native, avoiding uncontrollable consequences from excessive automation.<\/p>\n<p><a href=\"https:\/\/www.infoq.com\/presentations\/ai-autonomy-continuum\/?utm_campaign=infoq_content&#038;utm_source=infoq&#038;utm_medium=feed&#038;utm_term=AI%2C+ML+%26+Data+Engineering\" target=\"_blank\" rel=\"noopener noreferrer\">\u539f\u6587\u94fe\u63a5<\/a><\/p>\n<\/li>\n<li>\n<p><strong>Cybersecurity in the Intelligence Age<\/strong>\uff08OpenAI News\uff09<\/p>\n<p><strong>\u4e2d\u6587\u6458\u8981\uff1a<\/strong>OpenAI\u53d1\u5e03\u300a\u667a\u80fd\u65f6\u4ee3\u7684\u7f51\u7edc\u5b89\u5168\u300b\u884c\u52a8\u8ba1\u5212\uff0c\u63d0\u51fa\u4e94\u5927\u652f\u67f1\u5e94\u5bf9AI\u9a71\u52a8\u7684\u7f51\u7edc\u5a01\u80c1\uff1a\u666e\u53caAI\u7f51\u7edc\u9632\u5fa1\u5de5\u5177\u3001\u52a0\u5f3a\u653f\u5e9c\u4e0e\u884c\u4e1a\u534f\u8c03\u3001\u5f3a\u5316\u524d\u6cbf\u7f51\u7edc\u5b89\u5168\u80fd\u529b\u7ba1\u63a7\u3001\u4fdd\u6301\u90e8\u7f72\u53ef\u89c1\u6027\u4e0e\u63a7\u5236\u3001\u8d4b\u80fd\u7528\u6237\u81ea\u6211\u4fdd\u62a4\u3002OpenAI\u6307\u51faAI\u6b63\u5728\u91cd\u5851\u7f51\u7edc\u5b89\u5168\u683c\u5c40\uff0c\u9632\u5fa1\u8005\u548c\u653b\u51fb\u8005\u90fd\u5728\u5229\u7528AI\u80fd\u529b\uff0c\u56e0\u6b64\u9700\u8981\u4e0e\u8054\u90a6\u548c\u5dde\u653f\u5e9c\u53ca\u5546\u4e1a\u5b9e\u4f53\u5408\u4f5c\uff0c\u901a\u8fc7\u6c11\u4e3b\u5236\u5ea6\u548c\u6d41\u7a0b\u5efa\u7acb\u97e7\u6027\uff0c\u540c\u65f6\u6269\u5927\u53ef\u4fe1\u4e3b\u4f53\u83b7\u53d6\u9632\u5fa1\u6280\u672f\u7684\u6e20\u9053\u3002\u5b8c\u6574\u8ba1\u5212\u5df2\u4ee5PDF\u5f62\u5f0f\u516c\u5f00\u53d1\u5e03\u3002<\/p>\n<p><strong>English Summary:<\/strong> OpenAI published a &quot;Cybersecurity in the Intelligence Age&quot; action plan outlining five pillars to address AI-driven cyber threats: democratizing cyber defense, coordinating across government and industry, strengthening security around frontier cyber 
capabilities, preserving visibility and control in deployment, and enabling users to protect themselves. OpenAI notes AI is reshaping cybersecurity, with both defenders and attackers leveraging AI capabilities, necessitating collaboration with federal, state, and commercial entities. The plan emphasizes building resilience through democratic institutions while broadening access to defensive technologies for trusted actors. The complete plan is publicly available as a PDF.<\/p>\n<p><a href=\"https:\/\/openai.com\/index\/cybersecurity-in-the-intelligence-age\" target=\"_blank\" rel=\"noopener noreferrer\">\u539f\u6587\u94fe\u63a5<\/a><\/p>\n<\/li>\n<li>\n<p><strong>[AINews] not much happened today<\/strong>\uff08Latent Space\uff09<\/p>\n<p><strong>\u4e2d\u6587\u6458\u8981\uff1a<\/strong>Latent Space\u7684AINews\u680f\u76ee\u627f\u8ba4\u5f53\u65e5AI\u65b0\u95fb\u76f8\u5bf9\u5e73\u6de1\uff0c\u4f46\u6c47\u603b\u4e86\u503c\u5f97\u5173\u6ce8\u7684\u6280\u672f\u52a8\u6001\uff1avLLM 0.20\u53d1\u5e03\u5e26\u6765TurboQuant 2-bit KV\u7f13\u5b58\u3001DeepSeek V4 MegaMoE\u652f\u6301\u7b49\u63a8\u7406\u4f18\u5316\uff1bPoolside\u5f00\u6e9033B MoE\u4ee3\u7801\u6a21\u578bLaguna XS.2\uff0c\u53ef\u5728\u5355\u5361\u8fd0\u884c\uff1bNVIDIA\u53d1\u5e0330B\u591a\u6a21\u6001MoE\u6a21\u578bNemotron 3 Nano Omni\uff0c\u652f\u6301256K\u4e0a\u4e0b\u6587\u548c\u56fe\u6587\u97f3\u89c6\u9891\u7406\u89e3\uff0c\u83b7\u4e3b\u6d41\u5e73\u53f0\u540c\u65e5\u4e0a\u7ebf\uff1bMistral\u63a8\u51faWorkflows\u9884\u89c8\u7248\uff0c\u805a\u7126\u4f01\u4e1a\u7ea7\u667a\u80fd\u4f53\u7f16\u6392\uff1b\u672c\u5730\u79bb\u7ebf\u667a\u80fd\u4f53\u65b9\u6848\u65e5\u8d8b\u6210\u719f\uff0cHugging Face\u3001Gemma\u7b49\u63a8\u52a8\u7aef\u4fa7\u90e8\u7f72\u3002\u6b64\u5916\uff0cGPT-5.5 Pro\u5728Epoch Capabilities Index\u8fbe\u5230159\u5206\uff0cFrontierMath Tier 4\u89e3\u9898\u7387\u8fbe40%\u3002<\/p>\n<p><strong>English Summary:<\/strong> Latent Space&#039;s AINews acknowledged a quiet day in AI but highlighted notable 
developments: vLLM 0.20 released with TurboQuant 2-bit KV cache and DeepSeek V4 MegaMoE support for inference optimization; Poolside open-sourced Laguna XS.2, a 33B MoE coding model runnable on single GPU; NVIDIA launched Nemotron 3 Nano Omni, a 30B multimodal MoE with 256K context supporting text, image, video, and audio, with same-day availability across major platforms; Mistral introduced Workflows preview for enterprise agent orchestration; local offline agent solutions matured with Hugging Face and Gemma pushing on-device deployment. Additionally, GPT-5.5 Pro scored 159 on Epoch Capabilities Index with 40% on FrontierMath Tier 4.<\/p>\n<p><a href=\"https:\/\/www.latent.space\/p\/ainews-not-much-happened-today\" target=\"_blank\" rel=\"noopener noreferrer\">\u539f\u6587\u94fe\u63a5<\/a><\/p>\n<\/li>\n<li>\n<p><strong>[AINews] ImageGen is on the Path to AGI<\/strong>\uff08Latent Space\uff09<\/p>\n<p><strong>\u4e2d\u6587\u6458\u8981\uff1a<\/strong>Latent Space \u7684 AINews \u680f\u76ee\u63a2\u8ba8\u4e86 GPT-Image-2 \u5728\u56fe\u50cf\u751f\u6210\u9886\u57df\u7684\u6301\u7eed\u7206\u53d1\uff0c\u8ba4\u4e3a\u9ad8\u8d28\u91cf\u7684\u56fe\u50cf\u751f\u6210\u80fd\u529b\u662f\u5b9e\u73b0 AGI \u7684\u5fc5\u8981\u7ec4\u6210\u90e8\u5206\u3002\u6587\u7ae0\u6307\u51fa\uff0c\u5c3d\u7ba1\u5404\u5927\u5b9e\u9a8c\u5ba4\u90fd\u5728\u7ade\u76f8\u6a21\u4eff Anthropic \u4e13\u6ce8\u4e8e\u7f16\u7a0b\u548c\u4f01\u4e1a AI \u7684\u65b9\u5411\uff0c\u4f46 GPT-Image-2 \u5728\u521b\u610f\u5e94\u7528\u3001\u6559\u80b2\u5185\u5bb9\u3001\u6d41\u884c\u6587\u5316\u548c\u4fe1\u606f\u56fe\u8868\u751f\u6210\u65b9\u9762\u5c55\u73b0\u51fa\u72ec\u7279\u4ef7\u503c\u3002\u7279\u522b\u662f\u5f53\u56fe\u50cf\u751f\u6210\u4e0e Codex \u7f16\u7801\u4ee3\u7406\u7ed3\u5408\u65f6\uff0c\u5f00\u53d1\u8005\u53ef\u4ee5\u5728\u7f16\u7801\u8fc7\u7a0b\u4e2d\u5b9e\u65f6\u751f\u6210\u6240\u9700\u7d20\u6750\uff0c\u5f62\u6210&quot;\u95ed\u73af&quot;\u5de5\u4f5c\u6d41\u3002\u6587\u7ae0\u8fd8\u63d0\u5230 Nano 
Banana\u3001Grok Imagine \u7b49\u6a21\u578b\u7684\u8fdb\u5c55\uff0c\u5f3a\u8c03\u591a\u6a21\u6001\u80fd\u529b\uff08\u8bed\u97f3\u548c\u89c6\u89c9\u751f\u6210\uff09\u5bf9\u4e8e\u5b9e\u73b0\u771f\u6b63\u7684\u901a\u7528\u4eba\u5de5\u667a\u80fd\u81f3\u5173\u91cd\u8981\u3002<\/p>\n<p><strong>English Summary:<\/strong> Latent Space&#039;s AINews discusses the continued explosion of GPT-Image-2 in image generation, arguing that high-quality image generation is a necessary component for achieving AGI. While labs race to emulate Anthropic&#039;s coding and enterprise AI focus, GPT-Image-2 demonstrates unique value in creative applications, educational content, pop culture, and infographic generation. When combined with Codex coding agents, developers can generate assets in real-time during coding, creating a &quot;closed-loop&quot; workflow. The article also covers progress on Nano Banana and Grok Imagine, emphasizing that multimodal capabilities (voice and visual generation) are essential for true artificial general intelligence.<\/p>\n<p><a href=\"https:\/\/www.latent.space\/p\/ainews-imagegen-is-on-the-path-to\" target=\"_blank\" rel=\"noopener noreferrer\">\u539f\u6587\u94fe\u63a5<\/a><\/p>\n<\/li>\n<li>\n<p><strong>Reading today&#039;s open-closed performance gap<\/strong>\uff08Interconnects\uff09<\/p>\n<p><strong>\u4e2d\u6587\u6458\u8981\uff1a<\/strong>Nathan Lambert \u5728 Interconnects 
\u535a\u5ba2\u4e2d\u6df1\u5165\u5206\u6790\u4e86\u5f00\u6e90\u4e0e\u95ed\u6e90\u6a21\u578b\u4e4b\u95f4\u7684\u6027\u80fd\u5dee\u8ddd\uff0c\u6307\u51fa\u5355\u7eaf\u7528\u4e00\u4e2a\u6570\u5b57\u6765\u8861\u91cf\u8fd9\u79cd\u5dee\u8ddd\u4f1a\u63a9\u76d6\u8bb8\u591a\u5173\u952e\u52a8\u6001\u3002\u6587\u7ae0\u8ba8\u8bba\u4e86\u5f71\u54cd\u8bc4\u4f30\u7ed3\u679c\u7684\u590d\u6742\u56e0\u7d20\uff0c\u5305\u62ec\u57fa\u51c6\u6d4b\u8bd5\u968f\u65f6\u95f4\u7684\u6f14\u53d8\u3001\u6a21\u578b\u5b9e\u9645\u6027\u80fd\u4e0e\u6392\u540d\u4e4b\u95f4\u7684\u5173\u7cfb\uff0c\u4ee5\u53ca\u8bad\u7ec3\u65b9\u6cd5\u7684\u53d8\u5316\u3002\u4f5c\u8005\u8ba4\u4e3a\u5f53\u524d\u884c\u4e1a\u6b63\u5904\u4e8e\u4ee5\u590d\u6742\u7f16\u7a0b\u548c\u7ec8\u7aef\u4efb\u52a1\u4e3a\u91cd\u70b9\u7684\u65f6\u4ee3\u672b\u671f\uff0c\u524d\u6cbf\u5b9e\u9a8c\u5ba4\u6b63\u6295\u5165\u5de8\u989d\u8d44\u91d1\u638c\u63e1\u8fd9\u4e9b\u9886\u57df\uff0c\u540c\u65f6\u5f00\u59cb\u5411\u4f1a\u8ba1\u3001\u6cd5\u5f8b\u3001\u533b\u7597\u7b49\u4e13\u4e1a\u77e5\u8bc6\u5de5\u4f5c\u62d3\u5c55\u3002\u5f00\u6e90\u6a21\u578b\uff08\u5c24\u5176\u662f\u4e2d\u56fd\u5b9e\u9a8c\u5ba4\u7684\u6a21\u578b\uff09\u867d\u7136\u5728\u8ffd\u8d76\uff0c\u4f46\u5728\u9700\u8981\u79c1\u6709\u6570\u636e\u548c\u590d\u6742\u73af\u5883\u7684\u9886\u57df\u53ef\u80fd\u96be\u4ee5\u8ddf\u4e0a\uff0c\u56e0\u4e3a\u524d\u6cbf\u5b9e\u9a8c\u5ba4\u901a\u8fc7\u8d2d\u4e70\u65b0\u73af\u5883\u548c\u6570\u636e\u96c6\u5efa\u7acb\u4e86\u7c7b\u4f3c\u82af\u7247\u5de5\u5382\u7684\u7ade\u4e89\u4f18\u52bf\u3002<\/p>\n<p><strong>English Summary:<\/strong> Nathan Lambert&#039;s Interconnects blog provides an in-depth analysis of the performance gap between open and closed models, arguing that reducing this gap to a single number obscures crucial dynamics. The article discusses complex factors affecting evaluation results, including benchmark evolution over time, the relationship between model performance and rankings, and changes in training methodologies. 
The author suggests the industry is at the end of an era focused on complex coding and terminal tasks, with frontier labs investing heavily while expanding into specialized domains like accounting, law, and healthcare. While open models (particularly from Chinese labs) are catching up, they may struggle in areas requiring private data and complex environments, as frontier labs build competitive advantages through acquiring new environments and datasets.<\/p>\n<p><a href=\"https:\/\/www.interconnects.ai\/p\/reading-todays-open-closed-performance\" target=\"_blank\" rel=\"noopener noreferrer\">\u539f\u6587\u94fe\u63a5<\/a><\/p>\n<\/li>\n<li>\n<p><strong>Building an emoji list generator with the GitHub Copilot CLI<\/strong>\uff08GitHub AI\/ML\uff09<\/p>\n<p><strong>\u4e2d\u6587\u6458\u8981\uff1a<\/strong>GitHub \u535a\u5ba2\u5206\u4eab\u4e86\u5728 Rubber Duck Thursday \u76f4\u64ad\u4e2d\u4f7f\u7528 GitHub Copilot CLI \u6784\u5efa\u8868\u60c5\u7b26\u53f7\u5217\u8868\u751f\u6210\u5668\u7684\u5b9e\u8df5\u6848\u4f8b\u3002\u8be5\u9879\u76ee\u662f\u4e00\u4e2a\u7ec8\u7aef\u5e94\u7528\uff0c\u7528\u6237\u53ef\u4ee5\u7c98\u8d34\u6216\u8f93\u5165\u9879\u76ee\u5217\u8868\uff0c\u901a\u8fc7 AI \u667a\u80fd\u5339\u914d\u76f8\u5173\u8868\u60c5\u7b26\u53f7\uff0c\u5e76\u5c06\u7ed3\u679c\u590d\u5236\u5230\u526a\u8d34\u677f\u3002\u5f00\u53d1\u8fc7\u7a0b\u91c7\u7528\u4e86 Plan \u6a21\u5f0f\u8fdb\u884c\u9700\u6c42\u89c4\u5212\u548c\u67b6\u6784\u8bbe\u8ba1\uff0c\u7136\u540e\u4f7f\u7528 Claude Opus 4.7 \u5b9e\u73b0\u4ee3\u7801\u3002\u6280\u672f\u6808\u5305\u62ec OpenTUI \u7528\u4e8e\u7ec8\u7aef\u754c\u9762\u3001GitHub Copilot SDK \u63d0\u4f9b AI \u80fd\u529b\u3001clipboardy \u5904\u7406\u526a\u8d34\u677f\u529f\u80fd\u3002\u6587\u7ae0\u5c55\u793a\u4e86 Copilot CLI \u7684\u591a\u9879\u7279\u6027\uff0c\u5305\u62ec Plan \u6a21\u5f0f\u3001Autopilot \u6a21\u5f0f\u3001\u591a\u6a21\u578b\u5de5\u4f5c\u6d41\u3001allow-all \u5de5\u5177\u6807\u5fd7\u4ee5\u53ca GitHub MCP 
\u670d\u52a1\u5668\u7684\u96c6\u6210\u3002<\/p>\n<p><strong>English Summary:<\/strong> GitHub Blog shares a practical case of building an emoji list generator using GitHub Copilot CLI during the Rubber Duck Thursday livestream. The project is a terminal application where users can paste or input a list of items, which AI intelligently matches with relevant emojis and copies the result to the clipboard. The development process used Plan mode for requirements planning and architecture design, then implemented code with Claude Opus 4.7. The tech stack includes OpenTUI for terminal UI, GitHub Copilot SDK for AI capabilities, and clipboardy for clipboard functionality. The article showcases several Copilot CLI features including Plan mode, Autopilot mode, multi-model workflows, the allow-all tools flag, and GitHub MCP server integration.<\/p>\n<p><a href=\"https:\/\/github.blog\/ai-and-ml\/github-copilot\/building-an-emoji-list-generator-with-the-github-copilot-cli\/\" target=\"_blank\" rel=\"noopener noreferrer\">\u539f\u6587\u94fe\u63a5<\/a><\/p>\n<\/li>\n<li>\n<p><strong>Build a personal organization command center with GitHub Copilot CLI<\/strong>\uff08GitHub AI\/ML\uff09<\/p>\n<p><strong>\u4e2d\u6587\u6458\u8981\uff1a<\/strong>GitHub \u535a\u5ba2\u91c7\u8bbf\u4e86\u5de5\u7a0b\u5e08 Brittany Ellich\uff0c\u4ecb\u7ecd\u5979\u4f7f\u7528 GitHub Copilot CLI \u6784\u5efa\u7684\u4e2a\u4eba\u7ec4\u7ec7\u6307\u6325\u4e2d\u5fc3\u9879\u76ee\u3002\u8be5\u9879\u76ee\u65e8\u5728\u89e3\u51b3\u6570\u5b57\u4fe1\u606f\u5206\u6563\u7684\u95ee\u9898\uff0c\u5c06\u5206\u6563\u5728\u5341\u51e0\u4e2a\u4e0d\u540c\u5e94\u7528\u4e2d\u7684\u5185\u5bb9\u7edf\u4e00\u5230\u4e00\u4e2a\u96c6\u4e2d\u7684\u7a7a\u95f4\u4e2d\u3002Brittany \u91c7\u7528&quot;\u5148\u89c4\u5212\u540e\u5b9e\u65bd&quot;\u7684\u5de5\u4f5c\u6d41\u7a0b\uff0c\u5229\u7528 AI \u8fdb\u884c\u89c4\u5212\uff0c\u4f7f\u7528 Copilot 
\u8fdb\u884c\u5b9e\u73b0\uff0c\u4ec5\u7528\u4e00\u5929\u65f6\u95f4\u5c31\u5b8c\u6210\u4e86\u7b2c\u4e00\u4e2a\u53ef\u7528\u7248\u672c\u3002\u5979\u8be6\u7ec6\u4ecb\u7ecd\u4e86\u5f00\u53d1\u65b9\u6cd5\uff1aCopilot \u901a\u8fc7\u63d0\u95ee\u6765\u660e\u786e\u9700\u6c42\uff0c\u76f4\u5230\u5f62\u6210\u5145\u5206\u7684\u5b9e\u65bd\u8ba1\u5212\uff0c\u4ece\u800c\u51cf\u5c11\u731c\u6d4b\u5e76\u63d0\u9ad8\u5f00\u53d1\u6548\u7387\u3002\u5979\u5e38\u7528\u7684\u5de5\u5177\u6808\u5305\u62ec VS Code \u7684 Agent \u6a21\u5f0f\u8fdb\u884c\u540c\u6b65\u5f00\u53d1\uff0c\u4ee5\u53ca Copilot Cloud Agent \u8fdb\u884c\u5f02\u6b65\u5f00\u53d1\u3002\u6587\u7ae0\u5f3a\u8c03\uff0c\u4ece\u5934\u5f00\u59cb\u6784\u5efa\u89e3\u51b3\u65b9\u6848\u4ece\u672a\u5982\u6b64\u7b80\u5355\uff0c\u8fd9\u662f\u5b66\u4e60\u4f7f\u7528\u65b0 AI \u5de5\u5177\u7684\u7edd\u4f73\u65b9\u5f0f\u3002<\/p>\n<p><strong>English Summary:<\/strong> GitHub Blog interviews engineer Brittany Ellich about her personal organization command center project built with GitHub Copilot CLI. The project aims to solve digital fragmentation by unifying content scattered across a dozen different apps into one centralized space. Brittany adopted a &quot;plan-then-implement&quot; workflow, using AI for planning and Copilot for implementation, completing the first working version in just one day. She details her development approach: Copilot interviews her with questions to clarify requirements until an adequate implementation plan is formed, reducing guesswork and improving efficiency. Her preferred tool stack includes VS Code&#039;s Agent mode for synchronous development and Copilot Cloud Agent for asynchronous tasks. 
The article emphasizes that building solutions from scratch has never been easier and is an excellent way to learn new AI tools.<\/p>\n<p><a href=\"https:\/\/github.blog\/ai-and-ml\/github-copilot\/build-a-personal-organization-command-center-with-github-copilot-cli\/\" target=\"_blank\" rel=\"noopener noreferrer\">\u539f\u6587\u94fe\u63a5<\/a><\/p>\n<\/li>\n<li>\n<p><strong>Ollama is now powered by MLX on Apple Silicon in preview<\/strong>\uff08Ollama Blog\uff09<\/p>\n<p><strong>\u4e2d\u6587\u6458\u8981\uff1a<\/strong>Ollama \u5b98\u65b9\u535a\u5ba2\u5ba3\u5e03\u5728 Apple Silicon \u4e0a\u63a8\u51fa\u57fa\u4e8e MLX \u6846\u67b6\u7684\u9884\u89c8\u7248\u672c\uff0c\u8fd9\u662f\u76ee\u524d\u5728\u82f9\u679c\u82af\u7247\u4e0a\u8fd0\u884c Ollama \u7684\u6700\u5feb\u65b9\u5f0f\u3002\u65b0\u7248\u672c\u5229\u7528\u82f9\u679c\u7edf\u4e00\u5185\u5b58\u67b6\u6784\uff0c\u5728\u6240\u6709 Apple Silicon \u8bbe\u5907\u4e0a\u5b9e\u73b0\u663e\u8457\u52a0\u901f\uff0c\u5728 M5\u3001M5 Pro \u548c M5 Max \u82af\u7247\u4e0a\u66f4\u662f\u5229\u7528\u65b0\u7684 GPU \u795e\u7ecf\u52a0\u901f\u5668\u6765\u52a0\u901f\u9996 token \u65f6\u95f4\u548c\u751f\u6210\u901f\u5ea6\u3002Ollama 0.19 \u8fd8\u5f15\u5165\u4e86\u5bf9 NVIDIA NVFP4 \u683c\u5f0f\u7684\u652f\u6301\uff0c\u5728\u4fdd\u6301\u6a21\u578b\u7cbe\u5ea6\u7684\u540c\u65f6\u51cf\u5c11\u5185\u5b58\u5e26\u5bbd\u548c\u5b58\u50a8\u9700\u6c42\uff0c\u4f7f\u672c\u5730\u8fd0\u884c\u7ed3\u679c\u4e0e\u751f\u4ea7\u73af\u5883\u4fdd\u6301\u4e00\u81f4\u3002\u6b64\u5916\uff0c\u7f13\u5b58\u7cfb\u7edf\u5f97\u5230\u5347\u7ea7\uff0c\u5305\u62ec\u8de8\u5bf9\u8bdd\u91cd\u7528\u7f13\u5b58\u3001\u667a\u80fd\u68c0\u67e5\u70b9\u5b58\u50a8\u548c\u66f4\u667a\u80fd\u7684\u6dd8\u6c70\u7b56\u7565\uff0c\u4f7f\u7f16\u7801\u548c\u4ee3\u7406\u4efb\u52a1\u66f4\u52a0\u9ad8\u6548\u3002\u8be5\u7248\u672c\u7279\u522b\u9488\u5bf9 Qwen3.5-35B-A3B \u6a21\u578b\u8fdb\u884c\u4e86\u4f18\u5316\uff0c\u9002\u7528\u4e8e Claude Code\u3001OpenClaw 
\u7b49\u7f16\u7801\u4ee3\u7406\u573a\u666f\u3002<\/p>\n<p><strong>English Summary:<\/strong> Ollama&#039;s official blog announces a preview powered by Apple&#039;s MLX framework on Apple Silicon, currently the fastest way to run Ollama on Apple chips. The new version leverages Apple&#039;s unified memory architecture for significant acceleration across all Apple Silicon devices, with M5, M5 Pro, and M5 Max chips additionally using the new GPU Neural Accelerators to improve both time-to-first-token and generation speed. Ollama 0.19 also introduces support for NVIDIA&#039;s NVFP4 format, which maintains model accuracy while reducing memory bandwidth and storage requirements, so local results match production environments. Additionally, the caching system has been upgraded with cross-conversation cache reuse, intelligent checkpoint storage, and smarter eviction policies, making coding and agent tasks more efficient. The release is specifically optimized for the Qwen3.5-35B-A3B model, making it well suited to coding agents such as Claude Code and OpenClaw.<\/p>\n<p><a href=\"https:\/\/ollama.com\/blog\/mlx\" target=\"_blank\" rel=\"noopener noreferrer\">\u539f\u6587\u94fe\u63a5<\/a><\/p>\n<\/li>\n<\/ol>\n","protected":false},"excerpt":{"rendered":"<p>\u65e5\u671f\uff1a2026-04-30 \u672c\u671f\u805a\u7126\uff1a\u91cd\u70b9\u5173\u6ce8\u6a21\u578b\u53d1\u5e03\u4e0e release notes\u3001\u5b98\u65b9 engineeri 
[&hellip;]<\/p>\n","protected":false},"author":0,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[7],"tags":[],"class_list":["post-367","post","type-post","status-publish","format-standard","hentry","category-ai-daily"],"_links":{"self":[{"href":"http:\/\/www.faiyi.com\/index.php?rest_route=\/wp\/v2\/posts\/367","targetHints":{"allow":["GET"]}}],"collection":[{"href":"http:\/\/www.faiyi.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"http:\/\/www.faiyi.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"replies":[{"embeddable":true,"href":"http:\/\/www.faiyi.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=367"}],"version-history":[{"count":0,"href":"http:\/\/www.faiyi.com\/index.php?rest_route=\/wp\/v2\/posts\/367\/revisions"}],"wp:attachment":[{"href":"http:\/\/www.faiyi.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=367"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"http:\/\/www.faiyi.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=367"},{"taxonomy":"post_tag","embeddable":true,"href":"http:\/\/www.faiyi.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=367"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}