{"id":385,"date":"2026-05-08T07:31:20","date_gmt":"2026-05-07T23:31:20","guid":{"rendered":"http:\/\/www.faiyi.com\/?p=385"},"modified":"2026-05-08T07:31:20","modified_gmt":"2026-05-07T23:31:20","slug":"ai%e5%8a%a8%e6%80%81%e6%af%8f%e6%97%a5%e7%ae%80%e6%8a%a5-2026-05-08","status":"publish","type":"post","link":"http:\/\/www.faiyi.com\/?p=385","title":{"rendered":"AI\u52a8\u6001\u6bcf\u65e5\u7b80\u62a5 2026-05-08"},"content":{"rendered":"<p>\u65e5\u671f\uff1a2026-05-08<\/p>\n<p>\u672c\u671f\u805a\u7126\uff1a\u91cd\u70b9\u5173\u6ce8\u6a21\u578b\u53d1\u5e03\u4e0e release notes\u3001\u5b98\u65b9 engineering blog\u3001AI coding \/ agent \/ SRE\u3001\u8bc4\u6d4b\u699c\u5355\u53d8\u5316\u3001\u5f00\u53d1\u8005\u5b9e\u8df5\u535a\u5ba2\u3001\u6846\u67b6\u751f\u6001\u3001\u5f00\u6e90\u6a21\u578b\u4e0e\u771f\u5b9e\u7528\u6237\u89c6\u89d2\uff1b\u5f53 HN\u3001Reddit\u3001Hugging Face \u7b49\u793e\u533a\u6e90\u53ef\u8bbf\u95ee\u65f6\u4f18\u5148\u7eb3\u5165\u3002<\/p>\n<hr \/>\n<ol>\n<li>\n<p><strong>Artificial Analysis \u6700\u65b0\u6a21\u578b\u6392\u540d\u89c2\u5bdf<\/strong>\uff08Artificial Analysis\uff09<\/p>\n<p><strong>\u4e2d\u6587\u6458\u8981\uff1a<\/strong>Artificial Analysis \u66f4\u65b0\u4e86\u5176\u6a21\u578b\u667a\u80fd\u6307\u6570\u6392\u540d\uff0c\u76ee\u524d GPT-5.5 (xhigh) \u4ee5 60 \u5206\u4f4d\u5c45\u699c\u9996\uff0c\u7d27\u968f\u5176\u540e\u7684\u662f GPT-5.5 (high) \u548c Claude Opus 4.7 (max)\u3002\u8be5\u6307\u6570 v4.0 \u7248\u672c\u6574\u5408\u4e86 10 \u9879\u72ec\u7acb\u8bc4\u6d4b\uff0c\u6db5\u76d6 GDPval-AA\u3001Terminal-Bench Hard\u3001Humanity&#039;s Last Exam \u7b49\u3002\u5728\u901f\u5ea6\u4e0e\u4ef7\u683c\u65b9\u9762\uff0cMercury 2 \u4ee5\u6bcf\u79d2 689 \u4e2a token \u9886\u5148\uff0c\u800c Qwen3.5 0.8B \u5219\u4ee5\u6bcf\u767e\u4e07 token \u4ec5 0.02 \u7f8e\u5143\u6210\u4e3a\u6700\u5177\u6027\u4ef7\u6bd4\u7684\u9009\u62e9\u3002\u6b64\u5916\uff0cLlama 4 Scout \u62e5\u6709 1000 \u4e07 token 
\u7684\u4e0a\u4e0b\u6587\u7a97\u53e3\uff0c\u4f4d\u5c45\u7b2c\u4e00\u3002<\/p>\n<p><strong>English Summary:<\/strong> Artificial Analysis updated its Intelligence Index rankings, with GPT-5.5 (xhigh) leading at 60 points, followed by GPT-5.5 (high) and Claude Opus 4.7 (max). The v4.0 index incorporates 10 evaluations including GDPval-AA, Terminal-Bench Hard, and Humanity&#039;s Last Exam. For speed, Mercury 2 leads at 689 tokens per second, while Qwen3.5 0.8B is the most affordable at $0.02 per million tokens. Llama 4 Scout offers the largest context window at 10 million tokens.<\/p>\n<p><a href=\"https:\/\/artificialanalysis.ai\/models\" target=\"_blank\" rel=\"noopener noreferrer\">\u539f\u6587\u94fe\u63a5<\/a><\/p>\n<\/li>\n<li>\n<p><strong>Introducing Claude Opus 4.7<\/strong>\uff08Anthropic News\uff09<\/p>\n<p><strong>\u4e2d\u6587\u6458\u8981\uff1a<\/strong>Anthropic \u6b63\u5f0f\u53d1\u5e03 Claude Opus 4.7\uff0c\u5728\u9ad8\u7ea7\u8f6f\u4ef6\u5de5\u7a0b\u4efb\u52a1\u4e0a\u8f83\u524d\u4ee3 Opus 4.6 \u6709\u663e\u8457\u63d0\u5347\uff0c\u5c24\u5176\u5728\u5904\u7406\u6700\u56f0\u96be\u7684\u7f16\u7801\u4efb\u52a1\u65f6\u8868\u73b0\u66f4\u4e3a\u51fa\u8272\u3002\u8be5\u6a21\u578b\u5177\u5907\u66f4\u9ad8\u5206\u8fa8\u7387\u7684\u89c6\u89c9\u80fd\u529b\uff08\u652f\u6301\u957f\u8fbe 2576 \u50cf\u7d20\u7684\u56fe\u50cf\uff09\uff0c\u5728\u4e13\u4e1a\u4efb\u52a1\u4e2d\u5c55\u73b0\u51fa\u66f4\u4f73\u7684\u5ba1\u7f8e\u4e0e\u521b\u9020\u529b\u3002Opus 4.7 \u65b0\u589e\u4e86 xhigh \u52aa\u529b\u7ea7\u522b\u9009\u9879\uff0c\u5e76\u5f15\u5165\u4e86\u4efb\u52a1\u9884\u7b97\u529f\u80fd\u4ee5\u63a7\u5236 token \u6d88\u8017\u3002\u5b9a\u4ef7\u7ef4\u6301\u4e0d\u53d8\uff1a\u6bcf\u767e\u4e07\u8f93\u5165 token 5 \u7f8e\u5143\uff0c\u8f93\u51fa token 25 \u7f8e\u5143\u3002\u8be5\u6a21\u578b\u5df2\u5168\u9762\u4e0a\u7ebf Claude \u4ea7\u54c1\u3001API \u53ca\u5404\u5927\u4e91\u5e73\u53f0\u3002<\/p>\n<p><strong>English Summary:<\/strong> Anthropic officially released Claude Opus 4.7, showing 
notable improvements over Opus 4.6 in advanced software engineering, particularly on the most difficult coding tasks. The model features enhanced vision capabilities supporting images up to 2,576 pixels on the long edge, and demonstrates better taste and creativity for professional tasks. Opus 4.7 introduces a new xhigh effort level and task budgets for token spend control. Pricing remains at $5 per million input tokens and $25 per million output tokens. The model is available across all Claude products, API, and major cloud platforms.<\/p>\n<p><a href=\"https:\/\/www.anthropic.com\/news\/claude-opus-4-7\" target=\"_blank\" rel=\"noopener noreferrer\">\u539f\u6587\u94fe\u63a5<\/a><\/p>\n<\/li>\n<li>\n<p><strong>An update on recent Claude Code quality reports<\/strong>\uff08Anthropic Engineering\uff09<\/p>\n<p><strong>\u4e2d\u6587\u6458\u8981\uff1a<\/strong>Anthropic \u5de5\u7a0b\u56e2\u961f\u53d1\u5e03\u6280\u672f\u590d\u76d8\uff0c\u89e3\u91ca\u4e86\u8fc7\u53bb\u4e00\u4e2a\u6708 Claude Code \u8d28\u91cf\u4e0b\u964d\u7684\u4e09\u4e2a\u6839\u672c\u539f\u56e0\u3002\u7b2c\u4e00\uff0c3 \u6708 4 \u65e5\u5c06\u9ed8\u8ba4\u63a8\u7406\u52aa\u529b\u7ea7\u522b\u4ece high \u6539\u4e3a medium\uff0c\u5df2\u4e8e 4 \u6708 7 \u65e5\u56de\u6eda\uff1b\u7b2c\u4e8c\uff0c3 \u6708 26 \u65e5\u5b9e\u65bd\u7684\u7f13\u5b58\u4f18\u5316\u5b58\u5728 bug\uff0c\u5bfc\u81f4\u4f1a\u8bdd\u8d85\u8fc7\u4e00\u5c0f\u65f6\u95f2\u7f6e\u540e\u4f1a\u6301\u7eed\u6e05\u9664\u5386\u53f2\u63a8\u7406\u8bb0\u5f55\uff0c\u5df2\u4e8e 4 \u6708 10 \u65e5\u4fee\u590d\uff1b\u7b2c\u4e09\uff0c4 \u6708 16 \u65e5\u6dfb\u52a0\u7684\u51cf\u5c11\u5197\u957f\u56de\u590d\u7684\u7cfb\u7edf\u63d0\u793a\u610f\u5916\u635f\u5bb3\u4e86\u7f16\u7801\u8d28\u91cf\uff0c\u5df2\u4e8e 4 \u6708 20 \u65e5\u64a4\u9500\u3002Anthropic \u8868\u793a\u5c06\u52a0\u5f3a\u5185\u90e8\u6d4b\u8bd5\u6d41\u7a0b\uff0c\u5e76\u4e3a\u6240\u6709\u8ba2\u9605\u7528\u6237\u91cd\u7f6e\u4f7f\u7528\u989d\u5ea6\u3002<\/p>\n<p><strong>English Summary:<\/strong> 
Anthropic&#039;s engineering team published a postmortem explaining three root causes of recent Claude Code quality degradation. First, a March 4 change to the default reasoning effort level from high to medium was reverted on April 7. Second, a March 26 caching optimization bug caused reasoning history to be repeatedly cleared for sessions left idle for over an hour; it was fixed on April 10. Third, an April 16 system prompt change intended to reduce verbosity inadvertently hurt coding quality and was reverted on April 20. Anthropic committed to strengthening its internal testing processes and reset usage quotas for all subscribers.<\/p>\n<p><a href=\"https:\/\/www.anthropic.com\/engineering\/april-23-postmortem\" target=\"_blank\" rel=\"noopener noreferrer\">\u539f\u6587\u94fe\u63a5<\/a><\/p>\n<\/li>\n<li>\n<p><strong>Scaling Managed Agents: Decoupling the brain from the hands<\/strong>\uff08Anthropic Engineering\uff09<\/p>\n<p><strong>\u4e2d\u6587\u6458\u8981\uff1a<\/strong>Anthropic \u5de5\u7a0b\u535a\u5ba2\u6df1\u5165\u4ecb\u7ecd\u4e86 Managed Agents \u7684\u67b6\u6784\u8bbe\u8ba1\u7406\u5ff5\uff0c\u6838\u5fc3\u601d\u60f3\u662f\u5c06&quot;\u5927\u8111&quot;\uff08Claude \u53ca\u5176 harness\uff09\u4e0e&quot;\u53cc\u624b&quot;\uff08\u6c99\u76d2\u548c\u6267\u884c\u5de5\u5177\uff09\u4ee5\u53ca&quot;\u4f1a\u8bdd&quot;\uff08\u4e8b\u4ef6\u65e5\u5fd7\uff09\u89e3\u8026\u3002\u8fd9\u79cd\u8bbe\u8ba1\u501f\u9274\u4e86\u64cd\u4f5c\u7cfb\u7edf\u865a\u62df\u5316\u786c\u4ef6\u7684\u62bd\u8c61\u6a21\u5f0f\uff0c\u4f7f\u5404\u7ec4\u4ef6\u53ef\u4ee5\u72ec\u7acb\u5931\u8d25\u548c\u66ff\u6362\u3002\u901a\u8fc7\u5c06 harness \u79fb\u51fa\u5bb9\u5668\uff0c\u7cfb\u7edf\u5b9e\u73b0\u4e86 60% \u7684 p50 \u9996 token \u65f6\u95f4\u964d\u4f4e\u548c 90% \u7684 p95 
\u964d\u4f4e\u3002\u6587\u7ae0\u8fd8\u8ba8\u8bba\u4e86\u5b89\u5168\u8fb9\u754c\u8bbe\u8ba1\u3001\u4f1a\u8bdd\u4f5c\u4e3a\u4e0a\u4e0b\u6587\u5bf9\u8c61\u7684\u7ba1\u7406\uff0c\u4ee5\u53ca\u652f\u6301\u591a\u5927\u8111\u548c\u591a\u624b\u7684\u6269\u5c55\u80fd\u529b\u3002<\/p>\n<p><strong>English Summary:<\/strong> Anthropic&#039;s engineering blog detailed the architecture design of Managed Agents, centering on decoupling the &quot;brain&quot; (Claude and its harness) from the &quot;hands&quot; (sandboxes and tools) and the &quot;session&quot; (event log). Inspired by OS virtualization abstractions, this design allows components to fail and be replaced independently. Moving the harness out of containers achieved roughly 60% p50 and over 90% p95 time-to-first-token reductions.<\/p>\n<p><a href=\"https:\/\/www.anthropic.com\/engineering\/managed-agents\" target=\"_blank\" rel=\"noopener noreferrer\">\u539f\u6587\u94fe\u63a5<\/a><\/p>\n<\/li>\n<li>\n<p><strong>Improving token efficiency in GitHub Agentic Workflows<\/strong>\uff08GitHub AI\/ML\uff09<\/p>\n<p><strong>\u4e2d\u6587\u6458\u8981\uff1a<\/strong>GitHub \u5206\u4eab\u4e86\u4f18\u5316 Agentic Workflows token \u6548\u7387\u7684\u5b9e\u8df5\u7ecf\u9a8c\u3002\u56e2\u961f\u901a\u8fc7 API \u4ee3\u7406\u7edf\u4e00\u91c7\u96c6 token \u4f7f\u7528\u6570\u636e\uff0c\u5e76\u6784\u5efa\u4e86\u6bcf\u65e5\u5ba1\u8ba1\u548c\u4f18\u5316\u5de5\u4f5c\u6d41\u3002\u4e3b\u8981\u4f18\u5316\u7b56\u7565\u5305\u62ec\uff1a\u79fb\u9664\u672a\u4f7f\u7528\u7684 MCP \u5de5\u5177\uff08\u53ef\u51cf\u5c11\u6bcf\u8f6e 8-12 KB \u4e0a\u4e0b\u6587\uff09\u3001\u7528 GitHub CLI \u66ff\u4ee3 MCP \u8c03\u7528\u8fdb\u884c\u6570\u636e\u83b7\u53d6\u3001\u4ee5\u53ca\u5c06\u786e\u5b9a\u6027\u6570\u636e\u6536\u96c6\u79fb\u81f3 agent \u542f\u52a8\u524d\u7684\u9884\u6267\u884c\u6b65\u9aa4\u3002GitHub \u63d0\u51fa\u4e86&quot;\u6709\u6548 token 
(ET)&quot;\u6307\u6807\u6765\u6807\u51c6\u5316\u4e0d\u540c\u6a21\u578b\u7684\u6210\u672c\u6bd4\u8f83\u3002\u4f18\u5316\u540e\u7684\u5de5\u4f5c\u6d41\u663e\u793a\u663e\u8457\u6548\u679c\uff1aAuto-Triage Issues \u8282\u7701 62%\uff0cSecurity Guard \u8282\u7701 43%\uff0cSmoke Claude \u8282\u7701 59%\u3002<\/p>\n<p><strong>English Summary:<\/strong> GitHub shared practical experience in optimizing token efficiency for Agentic Workflows. The team standardized token usage collection via an API proxy and built daily auditing and optimization workflows. Key strategies include removing unused MCP tools (saving 8-12 KB per turn), replacing MCP calls with GitHub CLI for data fetching, and moving deterministic data gathering to pre-agent steps. GitHub introduced an &quot;Effective Tokens (ET)&quot; metric to normalize costs across models.<\/p>\n<p><a href=\"https:\/\/github.blog\/ai-and-ml\/github-copilot\/improving-token-efficiency-in-github-agentic-workflows\/\" target=\"_blank\" rel=\"noopener noreferrer\">\u539f\u6587\u94fe\u63a5<\/a><\/p>\n<\/li>\n<li>\n<p><strong>OpenAI launches new voice intelligence features in its API<\/strong>\uff08TechCrunch AI\uff09<\/p>\n<p><strong>\u4e2d\u6587\u6458\u8981\uff1a<\/strong>OpenAI \u5728\u5176 API \u4e2d\u63a8\u51fa\u591a\u9879\u8bed\u97f3\u667a\u80fd\u65b0\u529f\u80fd\uff0c\u5305\u62ec GPT-Realtime-2\u3001GPT-Realtime-Translate \u548c GPT-Realtime-Whisper\u3002GPT-Realtime-2 \u57fa\u4e8e GPT-5 \u7ea7\u63a8\u7406\u80fd\u529b\uff0c\u53ef\u5904\u7406\u66f4\u590d\u6742\u7684\u7528\u6237\u8bf7\u6c42\uff1bGPT-Realtime-Translate \u652f\u6301 70 \u591a\u79cd\u8f93\u5165\u8bed\u8a00\u548c 13 \u79cd\u8f93\u51fa\u8bed\u8a00\u7684\u5b9e\u65f6\u7ffb\u8bd1\uff1bWhisper \u5219\u63d0\u4f9b\u5b9e\u65f6\u8bed\u97f3\u8f6c\u6587\u672c\u529f\u80fd\u3002\u8fd9\u4e9b\u529f\u80fd\u9762\u5411\u5ba2\u6237\u670d\u52a1\u3001\u6559\u80b2\u3001\u5a92\u4f53\u548c\u521b\u4f5c\u8005\u5e73\u53f0\u7b49\u573a\u666f\uff0cOpenAI 
\u8fd8\u8bbe\u7f6e\u4e86\u5185\u5bb9\u5b89\u5168\u62a4\u680f\u4ee5\u9632\u6b62\u6ee5\u7528\u3002<\/p>\n<p><strong>English Summary:<\/strong> OpenAI launched new voice intelligence features in its API including GPT-Realtime-2 (with GPT-5-class reasoning), GPT-Realtime-Translate (supporting 70+ input and 13 output languages), and GPT-Realtime-Whisper for live speech-to-text. These capabilities target customer service, education, media, and creator platforms, with built-in guardrails to prevent abuse.<\/p>\n<p><a href=\"https:\/\/techcrunch.com\/2026\/05\/07\/openai-launches-new-voice-intelligence-features-in-its-api\/\" target=\"_blank\" rel=\"noopener noreferrer\">\u539f\u6587\u94fe\u63a5<\/a><\/p>\n<\/li>\n<li>\n<p><strong>Agent pull requests are everywhere. Here\u2019s how to review them.<\/strong>\uff08GitHub AI\/ML\uff09<\/p>\n<p><strong>\u4e2d\u6587\u6458\u8981\uff1a<\/strong>GitHub \u53d1\u5e03\u5173\u4e8e\u5982\u4f55\u5ba1\u67e5 AI Agent \u751f\u6210\u4ee3\u7801\u7684\u5b9e\u7528\u6307\u5357\u3002\u7814\u7a76\u8868\u660e\uff0cAgent \u751f\u6210\u7684\u4ee3\u7801\u6bd4\u4eba\u5de5\u4ee3\u7801\u5f15\u5165\u66f4\u591a\u5197\u4f59\u548c\u6280\u672f\u503a\u52a1\u3002\u6587\u7ae0\u6307\u51fa GitHub \u4e0a\u8d85\u8fc7\u4e94\u5206\u4e4b\u4e00\u7684\u4ee3\u7801\u5ba1\u67e5\u6d89\u53ca Agent\uff0cCopilot \u4ee3\u7801\u5ba1\u67e5\u5df2\u5904\u7406\u8d85 6000 \u4e07\u6b21\u3002\u6307\u5357\u5efa\u8bae\u5ba1\u67e5\u8005\u5173\u6ce8\u4e94\u5927\u98ce\u9669\uff1aCI \u914d\u7f6e\u88ab\u524a\u5f31\u3001\u4ee3\u7801\u91cd\u590d\u3001\u5e7b\u89c9\u5f0f\u6b63\u786e\u6027\u3001Agent \u54cd\u5e94\u5931\u8054\uff0c\u4ee5\u53ca\u5de5\u4f5c\u6d41\u4e2d\u7684\u4e0d\u53ef\u4fe1\u8f93\u5165\u3002\u5efa\u8bae\u5148\u8ba9 Copilot \u81ea\u52a8\u5ba1\u67e5\uff0c\u4eba\u5de5\u4e13\u6ce8\u4e8e\u5224\u65ad\u6027\u5de5\u4f5c\u3002<\/p>\n<p><strong>English Summary:<\/strong> GitHub published a practical guide on reviewing AI agent-generated pull requests. 
Research shows agent code introduces more redundancy and technical debt than human-written code. With over 20% of GitHub reviews now involving agents, the guide highlights five red flags: CI gaming, code reuse blindness, hallucinated correctness, agentic ghosting, and untrusted input in workflows. It recommends letting Copilot handle mechanical checks first while humans focus on judgment.<\/p>\n<p><a href=\"https:\/\/github.blog\/ai-and-ml\/generative-ai\/agent-pull-requests-are-everywhere-heres-how-to-review-them\/\" target=\"_blank\" rel=\"noopener noreferrer\">\u539f\u6587\u94fe\u63a5<\/a><\/p>\n<\/li>\n<li>\n<p><strong>Secure short-term GPU capacity for ML workloads with EC2 Capacity Blocks for ML and SageMaker training plans<\/strong>\uff08AWS ML Blog\uff09<\/p>\n<p><strong>\u4e2d\u6587\u6458\u8981\uff1a<\/strong>AWS \u535a\u5ba2\u4ecb\u7ecd\u5982\u4f55\u901a\u8fc7 EC2 Capacity Blocks for ML \u548c SageMaker Training Plans \u4e3a\u77ed\u671f ML \u5de5\u4f5c\u8d1f\u8f7d\u9884\u7559 GPU \u5bb9\u91cf\u3002\u9762\u5bf9 GPU \u4f9b\u5e94\u7d27\u5f20\uff0cCapacity Blocks \u5141\u8bb8\u63d0\u524d\u6700\u591a 8 \u5468\u9884\u8ba2 1-182 \u5929\u7684 GPU \u5bb9\u91cf\uff0c\u4ef7\u683c\u6bd4\u6309\u9700\u4f4e 40-50%\uff1bSageMaker Training Plans \u5219\u4e3a\u6258\u7ba1\u73af\u5883\u63d0\u4f9b\u9884\u7559\u5bb9\u91cf\uff0c\u4ef7\u683c\u6bd4\u6309\u9700\u4f4e 70-75%\u3002\u6587\u7ae0\u63d0\u4f9b\u4e86\u51b3\u7b56\u6d41\u7a0b\u56fe\uff0c\u5e2e\u52a9\u7528\u6237\u6839\u636e\u5de5\u4f5c\u8d1f\u8f7d\u7c7b\u578b\u3001\u53ef\u7528\u6027\u9700\u6c42\u548c\u6210\u672c\u6a21\u578b\u9009\u62e9\u5408\u9002\u65b9\u6848\uff0c\u5e76\u5305\u542b\u8be6\u7ec6\u7684 CLI \u914d\u7f6e\u793a\u4f8b\u3002<\/p>\n<p><strong>English Summary:<\/strong> AWS blog explains how to secure short-term GPU capacity using EC2 Capacity Blocks for ML and SageMaker Training Plans. 
Capacity Blocks allow reserving GPU capacity for 1-182 days, bookable up to 8 weeks in advance at 40-50% below on-demand pricing; SageMaker Training Plans offer reserved capacity for managed workloads at 70-75% below on-demand rates. The post provides a decision framework and detailed CLI configuration examples for choosing between the two options.<\/p>\n<p><a href=\"https:\/\/aws.amazon.com\/blogs\/machine-learning\/secure-short-term-gpu-capacity-for-ml-workloads-with-ec2-capacity-blocks-for-ml-and-sagemaker-training-plans\/\" target=\"_blank\" rel=\"noopener noreferrer\">\u539f\u6587\u94fe\u63a5<\/a><\/p>\n<\/li>\n<li>\n<p><strong>Notes from inside China&#039;s AI labs<\/strong>\uff08Interconnects\uff09<\/p>\n<p><strong>\u4e2d\u6587\u6458\u8981\uff1a<\/strong>Interconnects \u535a\u5ba2\u4f5c\u8005\u5206\u4eab\u8bbf\u95ee\u4e2d\u56fd\u4e3b\u8981 AI \u5b9e\u9a8c\u5ba4\u7684\u89c1\u95fb\u3002\u6587\u7ae0\u6307\u51fa\u4e2d\u56fd\u7814\u7a76\u4eba\u5458\u5728\u5de5\u7a0b\u5b9e\u73b0\u548c\u5feb\u901f\u8ddf\u8fdb\u65b9\u9762\u5177\u6709\u6587\u5316\u4f18\u52bf\uff1a\u66f4\u613f\u610f\u505a\u975e flashy \u7684\u57fa\u7840\u5de5\u4f5c\u3001\u8f83\u5c11 ego \u51b2\u7a81\u3001\u5927\u91cf\u5e74\u8f7b\u5b66\u751f\u76f4\u63a5\u53c2\u4e0e\u6838\u5fc3\u5f00\u53d1\u3002\u4e2d\u56fd AI \u751f\u6001\u66f4\u50cf\u534f\u4f5c\u7f51\u7edc\u800c\u975e\u5bf9\u6297\u90e8\u843d\uff0c\u5404\u5b9e\u9a8c\u5ba4\u666e\u904d\u5c0a\u91cd DeepSeek \u7684\u6280\u672f\u54c1\u5473\u548c\u5b57\u8282\u8df3\u52a8\u7684\u5e02\u573a\u5730\u4f4d\u3002\u4e0e\u897f\u65b9\u4e0d\u540c\uff0c\u4e2d\u56fd\u516c\u53f8\u666e\u904d\u503e\u5411\u4e8e\u81ea\u5efa\u6a21\u578b\u800c\u975e\u8d2d\u4e70\u670d\u52a1\uff0c\u53cd\u6620\u51fa\u6280\u672f\u81ea\u4e3b\u7684\u6df1\u5c42\u6587\u5316\u3002<\/p>\n<p><strong>English Summary:<\/strong> Interconnects blog shares insights from visiting leading Chinese AI labs. 
Chinese researchers excel at execution and fast-following due to cultural factors: willingness to do unglamorous work, less ego-driven conflict, and heavy student involvement in core development. The Chinese AI ecosystem operates more collaboratively than competitively, with universal respect for DeepSeek&#039;s technical taste and ByteDance&#039;s market position.<\/p>\n<p><a href=\"https:\/\/www.interconnects.ai\/p\/notes-from-inside-chinas-ai-labs\" target=\"_blank\" rel=\"noopener noreferrer\">\u539f\u6587\u94fe\u63a5<\/a><\/p>\n<\/li>\n<li>\n<p><strong>OpenAI Introduces Websocket-Based Execution Mode to Reduce Latency in Agentic Workflows<\/strong>\uff08InfoQ AI\/ML\uff09<\/p>\n<p><strong>\u4e2d\u6587\u6458\u8981\uff1a<\/strong>OpenAI \u4e3a\u5176 Responses API \u63a8\u51fa\u57fa\u4e8e WebSocket \u7684\u6267\u884c\u6a21\u5f0f\uff0c\u4ee5\u964d\u4f4e Agentic \u5de5\u4f5c\u6d41\u7684\u5ef6\u8fdf\u3002\u4f20\u7edf HTTP \u8bf7\u6c42-\u54cd\u5e94\u6a21\u5f0f\u5728\u591a\u6b65\u63a8\u7406\u4e2d\u9700\u8981\u91cd\u590d\u5efa\u7acb\u8fde\u63a5\uff0c\u800c WebSocket \u6301\u4e45\u53cc\u5411\u8fde\u63a5\u53ef\u51cf\u5c11\u9ad8\u8fbe 40% \u7684\u5ef6\u8fdf\uff0c\u5e76\u652f\u6301\u6bcf\u79d2 1000 \u6b21\u4e8b\u52a1\u7684\u6301\u7eed\u541e\u5410\u548c\u6700\u9ad8 4000 TPS \u7684\u7a81\u53d1\u6d41\u91cf\u3002Vercel\u3001Cline \u548c Cursor \u7b49\u5e73\u53f0\u5df2\u96c6\u6210\u8be5\u529f\u80fd\uff0c\u5206\u522b\u62a5\u544a 40%\u300139% \u548c 30% \u7684\u5ef6\u8fdf\u6539\u5584\u3002\u8be5\u6a21\u5f0f\u652f\u6301 ZDR\uff08\u96f6\u6570\u636e\u4fdd\u7559\uff09\uff0c\u9002\u7528\u4e8e\u7f16\u7801 Agent \u548c\u5b9e\u65f6 AI \u7cfb\u7edf\u3002<\/p>\n<p><strong>English Summary:<\/strong> OpenAI introduced a WebSocket-based execution mode for its Responses API to reduce latency in agentic workflows. 
Replacing traditional HTTP request-response patterns with persistent bidirectional connections, the new mode achieves up to 40% latency reduction with sustained throughput of ~1,000 TPS and burst capacity up to 4,000 TPS. Platforms like Vercel, Cline, and Cursor have integrated it, reporting 40%, 39%, and 30% latency improvements respectively. The feature is ZDR-compatible and targets coding agents and real-time AI systems.<\/p>\n<p><a href=\"https:\/\/www.infoq.com\/news\/2026\/05\/openai-websocket-responses-api\/?utm_campaign=infoq_content&#038;utm_source=infoq&#038;utm_medium=feed&#038;utm_term=AI%2C+ML+%26+Data+Engineering\" target=\"_blank\" rel=\"noopener noreferrer\">\u539f\u6587\u94fe\u63a5<\/a><\/p>\n<\/li>\n<li>\n<p><strong>Scaling Trusted Access for Cyber with GPT-5.5 and GPT-5.5-Cyber<\/strong>\uff08OpenAI News\uff09<\/p>\n<p><strong>\u4e2d\u6587\u6458\u8981\uff1a<\/strong>OpenAI \u5ba3\u5e03\u6269\u5c55 Trusted Access for Cyber\uff08TAC\uff09\u8ba1\u5212\uff0c\u63a8\u51fa GPT-5.5 \u548c GPT-5.5-Cyber \u4e24\u6b3e\u6a21\u578b\uff0c\u4e13\u4e3a\u7f51\u7edc\u5b89\u5168\u9632\u5fa1\u8005\u8bbe\u8ba1\u3002TAC \u662f\u4e00\u4e2a\u57fa\u4e8e\u8eab\u4efd\u9a8c\u8bc1\u7684\u4fe1\u4efb\u6846\u67b6\uff0c\u901a\u8fc7\u5206\u7ea7\u8bbf\u95ee\u673a\u5236\u4e3a\u7ecf\u5ba1\u6838\u7684\u9632\u5fa1\u4eba\u5458\u63d0\u4f9b\u4e0d\u540c\u7a0b\u5ea6\u7684\u6a21\u578b\u80fd\u529b\uff1a\u6807\u51c6\u7248 GPT-5.5 \u9002\u7528\u4e8e\u4e00\u822c\u9632\u5fa1\u5de5\u4f5c\uff0cTAC \u7248\u672c\u53ef\u964d\u4f4e\u5206\u7c7b\u5668\u62d2\u7edd\u7387\u4ee5\u652f\u6301\u6f0f\u6d1e\u8bc6\u522b\u3001\u6076\u610f\u8f6f\u4ef6\u5206\u6790\u7b49\u4efb\u52a1\uff0c\u800c GPT-5.5-Cyber \u5219\u9762\u5411\u6388\u6743\u7ea2\u961f\u548c\u6e17\u900f\u6d4b\u8bd5\u7b49\u7279\u6b8a\u573a\u666f\u3002OpenAI \u5df2\u4e0e Cisco\u3001CrowdStrike\u3001Palo Alto Networks 
\u7b49\u5b89\u5168\u5382\u5546\u5408\u4f5c\uff0c\u6784\u5efa\u4ece\u6f0f\u6d1e\u7814\u7a76\u5230\u7f51\u7edc\u9632\u62a4\u7684\u5b8c\u6574\u5b89\u5168\u98de\u8f6e\uff0c\u540c\u65f6\u63a8\u51fa Codex Security \u5de5\u5177\u5e2e\u52a9\u5f00\u6e90\u9879\u76ee\u81ea\u52a8\u53d1\u73b0\u548c\u4fee\u590d\u6f0f\u6d1e\u3002<\/p>\n<p><strong>English Summary:<\/strong> OpenAI expands its Trusted Access for Cyber (TAC) program with GPT-5.5 and GPT-5.5-Cyber, designed specifically for cybersecurity defenders. The identity-based trust framework offers tiered access: standard GPT-5.5 for general use, TAC-enabled version with reduced refusals for defensive workflows like vulnerability triage and malware analysis, and GPT-5.5-Cyber for specialized authorized activities such as red teaming. OpenAI partners with major security vendors including Cisco, CrowdStrike, and Palo Alto Networks to build a security flywheel spanning vulnerability research to network protection, while also launching Codex Security to help open-source projects automatically identify and remediate vulnerabilities.<\/p>\n<p><a href=\"https:\/\/openai.com\/index\/gpt-5-5-with-trusted-access-for-cyber\" target=\"_blank\" rel=\"noopener noreferrer\">\u539f\u6587\u94fe\u63a5<\/a><\/p>\n<\/li>\n<li>\n<p><strong>Parloa builds service agents customers want to talk to<\/strong>\uff08OpenAI News\uff09<\/p>\n<p><strong>\u4e2d\u6587\u6458\u8981\uff1a<\/strong>\u67cf\u6797\u521d\u521b\u516c\u53f8 Parloa \u501f\u52a9 OpenAI \u6a21\u578b\u6784\u5efa\u4f01\u4e1a\u7ea7 AI \u5ba2\u670d\u4ee3\u7406\u7ba1\u7406\u5e73\u53f0\uff08AMP\uff09\uff0c\u652f\u6301\u65e0\u4ee3\u7801\u65b9\u5f0f\u8bbe\u8ba1\u3001\u6a21\u62df\u548c\u90e8\u7f72\u8bed\u97f3\u9a71\u52a8\u7684\u5ba2\u6237\u670d\u52a1\u7cfb\u7edf\u3002\u8be5\u5e73\u53f0\u5141\u8bb8\u4e1a\u52a1\u4e13\u5bb6\u901a\u8fc7\u81ea\u7136\u8bed\u8a00\u5b9a\u4e49\u4ee3\u7406\u884c\u4e3a\uff0c\u4f7f\u7528 GPT-5.4 
\u7b49\u6a21\u578b\u8fdb\u884c\u5bf9\u8bdd\u6a21\u62df\u548c\u8bc4\u4f30\uff0c\u5b9e\u73b0\u4e0a\u7ebf\u524d\u7684\u5145\u5206\u6d4b\u8bd5\u3002Parloa \u91c7\u7528\u6a21\u5757\u5316\u5b50\u4ee3\u7406\u67b6\u6784\u548c\u786e\u5b9a\u6027\u63a7\u5236\u76f8\u7ed3\u5408\u7684\u8bbe\u8ba1\uff0c\u5728\u4fdd\u6301\u5bf9\u8bdd\u7075\u6d3b\u6027\u7684\u540c\u65f6\u786e\u4fdd\u5173\u952e\u6b65\u9aa4\u7684\u53ef\u9760\u6267\u884c\u3002\u76ee\u524d\u8be5\u5e73\u53f0\u5df2\u670d\u52a1\u96f6\u552e\u3001\u65c5\u6e38\u3001\u4fdd\u9669\u7b49\u884c\u4e1a\uff0c\u67d0\u5168\u7403\u65c5\u6e38\u516c\u53f8\u90e8\u7f72\u540e\u4eba\u5de5\u8f6c\u63a5\u8bf7\u6c42\u51cf\u5c11 80%\uff0c\u5c55\u73b0\u4e86\u4f01\u4e1a\u7ea7 AI \u5ba2\u670d\u7684\u53ef\u884c\u6027\u4e0e\u89c4\u6a21\u5316\u6f5c\u529b\u3002<\/p>\n<p><strong>English Summary:<\/strong> Berlin-based startup Parloa leverages OpenAI models to build AMP, an enterprise AI Agent Management Platform for voice-driven customer service. The no-code platform enables business experts to define agent behavior in natural language, simulate conversations using models like GPT-5.4, and evaluate performance before deployment. Parloa employs a modular sub-agent architecture combined with deterministic controls to balance conversational flexibility with reliable execution. 
Currently serving industries including retail, travel, and insurance, the platform helped one global travel company reduce human agent requests by 80%, demonstrating the viability and scalability of enterprise AI customer service.<\/p>\n<p><a href=\"https:\/\/openai.com\/index\/parloa\" target=\"_blank\" rel=\"noopener noreferrer\">\u539f\u6587\u94fe\u63a5<\/a><\/p>\n<\/li>\n<li>\n<p><strong>[AINews] Anthropic-SpaceXai&#039;s 300MW\/$5B\/yr deal for Colossus I, ARR growth is 8000% annualized<\/strong>\uff08Latent Space\uff09<\/p>\n<p><strong>\u4e2d\u6587\u6458\u8981\uff1a<\/strong>Anthropic \u5728\u7b2c\u4e8c\u5c4a\u5e74\u5ea6\u5f00\u53d1\u8005\u5927\u4f1a\u4e0a\u5ba3\u5e03\u4e0e SpaceX\/xAI \u8fbe\u6210\u91cd\u5927\u7b97\u529b\u5408\u4f5c\uff0c\u5c06\u83b7\u5f97 Colossus 1 \u8d85\u7ea7\u8ba1\u7b97\u96c6\u7fa4\u8d85\u8fc7 300 \u5146\u74e6\u7684\u7b97\u529b\u652f\u6301\uff0c\u6d89\u53ca\u7ea6 22 \u4e07\u5757 NVIDIA GPU\uff0c\u9884\u8ba1\u5e74\u6210\u672c\u7ea6 50 \u4ebf\u7f8e\u5143\u3002\u4f5c\u4e3a\u76f4\u63a5\u7ed3\u679c\uff0cClaude Code \u7684 5 \u5c0f\u65f6\u901f\u7387\u9650\u5236\u7acb\u5373\u7ffb\u500d\uff0cPro \u548c Max \u7528\u6237\u7684\u5cf0\u503c\u65f6\u6bb5\u9650\u5236\u88ab\u53d6\u6d88\uff0cOpus API \u901f\u7387\u9650\u5236\u4e5f\u5927\u5e45\u63d0\u5347\u3002Anthropic CEO Dario Amodei \u900f\u9732\u516c\u53f8\u5e74\u5316\u7ecf\u5e38\u6027\u6536\u5165\u589e\u957f\u8fbe 80 \u500d\uff0c\u5e76\u9884\u6d4b 2026 \u5e74\u5c06\u51fa\u73b0\u5355\u4eba\u5341\u4ebf\u7f8e\u5143\u516c\u53f8\u3002\u5927\u4f1a\u8fd8\u53d1\u5e03\u4e86 Claude Managed Agents \u7684\u4e09\u9879\u65b0\u529f\u80fd\uff1a\u8bb0\u5fc6\u529f\u80fd Dreaming\u3001\u8bc4\u4f30\u6846\u67b6 Outcomes \u548c\u4ee3\u7406\u7f16\u6392\u80fd\u529b\u3002<\/p>\n<p><strong>English Summary:<\/strong> At its second annual developer event, Anthropic announced a major compute partnership with SpaceX\/xAI, securing over 300MW of capacity from the Colossus 1 supercluster with approximately 220,000 NVIDIA GPUs, 
estimated at $5 billion annually. As an immediate result, Claude Code&#039;s 5-hour rate limits are doubled, peak-hour restrictions removed for Pro\/Max users, and Opus API limits substantially increased. CEO Dario Amodei revealed 80x annualized ARR growth and predicted 2026 will see a one-person billion-dollar company. The event also introduced three new Claude Managed Agents features: Dreaming (memory), Outcomes (evaluation framework), and agent orchestration capabilities.<\/p>\n<p><a href=\"https:\/\/www.latent.space\/p\/ainews-anthropic-spacexais-300mw5byr\" target=\"_blank\" rel=\"noopener noreferrer\">\u539f\u6587\u94fe\u63a5<\/a><\/p>\n<\/li>\n<li>\n<p><strong>[AINews] Silicon Valley gets Serious about Services<\/strong>\uff08Latent Space\uff09<\/p>\n<p><strong>\u4e2d\u6587\u6458\u8981\uff1a<\/strong>\u7845\u8c37\u5934\u90e8 AI \u5b9e\u9a8c\u5ba4\u6b63\u52a0\u901f\u5e03\u5c40\u670d\u52a1\u4e1a\u52a1\uff0c\u6807\u5fd7\u7740 AI \u884c\u4e1a\u4ece\u6a21\u578b\u7ade\u4e89\u5411\u4f01\u4e1a\u843d\u5730\u670d\u52a1\u8f6c\u578b\u3002Anthropic \u4e0e Blackstone\u3001Hellman &amp; Friedman \u53ca Goldman Sachs \u6210\u7acb\u5408\u8d44\u4f01\u4e1a\uff0c\u6295\u5165 15 \u4ebf\u7f8e\u5143\u4e3a\u4f01\u4e1a\u5ba2\u6237\u5b9a\u5236 Claude \u9a71\u52a8\u7684 AI \u7cfb\u7edf\uff1bOpenAI \u5219\u6210\u7acb The Deployment Company\uff0c\u7531 COO Brad Lightcap \u9886\u5bfc\uff0c\u5df2\u83b7\u5f97\u7ea6 40 \u4ebf\u7f8e\u5143\u878d\u8d44\uff0c\u4f30\u503c\u8fbe 100 \u4ebf\u7f8e\u5143\uff0c\u4e13\u6ce8\u901a\u8fc7\u79c1\u52df\u80a1\u6743\u6e20\u9053\u5411\u4f01\u4e1a\u9500\u552e\u8f6f\u4ef6\u3002\u4e0e\u6b64\u540c\u65f6\uff0cPerplexity \u63a8\u51fa\u9762\u5411\u4e13\u4e1a\u91d1\u878d\u7684 Computer \u4ea7\u54c1\uff0cAnthropic \u4e3e\u529e\u91d1\u878d\u670d\u52a1\u4e13\u573a\u6d3b\u52a8\u3002\u884c\u4e1a\u89c2\u5bdf\u6307\u51fa\uff0c\u968f\u7740 AI \u4ee3\u7406\u8fdb\u5165\u77e5\u8bc6\u5de5\u4f5c\u9886\u57df\uff0cIT 
\u7cfb\u7edf\u5347\u7ea7\u3001\u5de5\u4f5c\u6d41\u73b0\u4ee3\u5316\u3001\u4eba\u673a\u534f\u4f5c\u8bbe\u8ba1\u7b49\u670d\u52a1\u9700\u6c42\u6fc0\u589e\uff0c\u521b\u9020\u4e86\u5927\u91cf\u65b0\u673a\u4f1a\u3002<\/p>\n<p><strong>English Summary:<\/strong> Leading Silicon Valley AI labs are aggressively expanding into services, signaling an industry shift from model competition to enterprise deployment. Anthropic formed a joint venture with Blackstone, Hellman &amp; Friedman, and Goldman Sachs, investing $1.5 billion to build customized Claude-powered systems for enterprise clients. OpenAI launched The Deployment Company, led by COO Brad Lightcap, which has raised approximately $4 billion at a $10 billion valuation to sell software through private equity channels. Meanwhile, Perplexity introduced its Professional Finance Computer product, and Anthropic held a financial services event. Industry observers note that as AI agents enter knowledge work, demand for IT upgrades, workflow modernization, and human-agent collaboration design is surging, creating significant new opportunities.<\/p>\n<p><a href=\"https:\/\/www.latent.space\/p\/ainews-silicon-valley-gets-serious\" target=\"_blank\" rel=\"noopener noreferrer\">\u539f\u6587\u94fe\u63a5<\/a><\/p>\n<\/li>\n<li>\n<p><strong>Ollama is now powered by MLX on Apple Silicon in preview<\/strong>\uff08Ollama Blog\uff09<\/p>\n<p><strong>\u4e2d\u6587\u6458\u8981\uff1a<\/strong>Ollama \u53d1\u5e03\u57fa\u4e8e Apple MLX \u6846\u67b6\u7684\u9884\u89c8\u7248\u672c\uff0c\u6210\u4e3a Apple Silicon \u4e0a\u8fd0\u884c\u672c\u5730\u5927\u8bed\u8a00\u6a21\u578b\u7684\u6700\u5feb\u65b9\u6848\u3002\u65b0\u7248\u672c\u5145\u5206\u5229\u7528\u82f9\u679c\u7edf\u4e00\u5185\u5b58\u67b6\u6784\uff0c\u5728 M5 \u7cfb\u5217\u82af\u7247\u4e0a\u501f\u52a9 GPU \u795e\u7ecf\u52a0\u901f\u5668\u663e\u8457\u63d0\u5347\u9996 token \u65f6\u95f4\u548c\u751f\u6210\u901f\u5ea6\u3002\u6d4b\u8bd5\u663e\u793a\uff0cQwen3.5-35B-A3B \u6a21\u578b\u5728 NVFP4 
\u91cf\u5316\u4e0b\u9884\u586b\u5145\u901f\u5ea6\u8fbe 1851 token\/s\uff0c\u89e3\u7801\u901f\u5ea6\u8fbe 134 token\/s\u3002Ollama 0.19 \u8fd8\u652f\u6301 NVIDIA NVFP4 \u683c\u5f0f\u4ee5\u4fdd\u6301\u4e0e\u751f\u4ea7\u73af\u5883\u7684\u4e00\u81f4\u6027\uff0c\u5e76\u6539\u8fdb\u4e86\u7f13\u5b58\u673a\u5236\uff0c\u5b9e\u73b0\u8de8\u5bf9\u8bdd\u7f13\u5b58\u590d\u7528\u3001\u667a\u80fd\u68c0\u67e5\u70b9\u548c\u66f4\u667a\u80fd\u7684\u6dd8\u6c70\u7b56\u7565\uff0c\u7279\u522b\u9002\u5408 Claude Code\u3001OpenClaw \u7b49\u7f16\u7801\u4ee3\u7406\u573a\u666f\u3002\u8be5\u7248\u672c\u8981\u6c42 Mac \u914d\u5907\u8d85\u8fc7 32GB \u7edf\u4e00\u5185\u5b58\u3002<\/p>\n<p><strong>English Summary:<\/strong> Ollama released a preview version powered by Apple&#039;s MLX framework, becoming the fastest way to run local LLMs on Apple Silicon. The new version leverages Apple&#039;s unified memory architecture and GPU Neural Accelerators on M5 series chips to significantly improve time-to-first-token and generation speed. Benchmarks show Qwen3.5-35B-A3B with NVFP4 quantization achieves 1851 tokens\/s prefill and 134 tokens\/s decode. Ollama 0.19 also adds NVIDIA NVFP4 support for production parity and enhanced caching with cross-conversation reuse, intelligent checkpoints, and smarter eviction\u2014particularly beneficial for coding agents like Claude Code and OpenClaw. 
The release requires Macs with over 32GB unified memory.<\/p>\n<p><a href=\"https:\/\/ollama.com\/blog\/mlx\" target=\"_blank\" rel=\"noopener noreferrer\">\u539f\u6587\u94fe\u63a5<\/a><\/p>\n<\/li>\n<\/ol>\n","protected":false},"excerpt":{"rendered":"<p>\u65e5\u671f\uff1a2026-05-08 \u672c\u671f\u805a\u7126\uff1a\u91cd\u70b9\u5173\u6ce8\u6a21\u578b\u53d1\u5e03\u4e0e release notes\u3001\u5b98\u65b9 engineeri [&hellip;]<\/p>\n","protected":false},"author":0,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[7],"tags":[],"class_list":["post-385","post","type-post","status-publish","format-standard","hentry","category-ai-daily"],"_links":{"self":[{"href":"http:\/\/www.faiyi.com\/index.php?rest_route=\/wp\/v2\/posts\/385","targetHints":{"allow":["GET"]}}],"collection":[{"href":"http:\/\/www.faiyi.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"http:\/\/www.faiyi.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"replies":[{"embeddable":true,"href":"http:\/\/www.faiyi.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=385"}],"version-history":[{"count":0,"href":"http:\/\/www.faiyi.com\/index.php?rest_route=\/wp\/v2\/posts\/385\/revisions"}],"wp:attachment":[{"href":"http:\/\/www.faiyi.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=385"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"http:\/\/www.faiyi.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=385"},{"taxonomy":"post_tag","embeddable":true,"href":"http:\/\/www.faiyi.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=385"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}