12/3/2025 Share

AI Concern - Sam Altman

Sam Altman on Fox News just admitted what most tech CEOs won't say out loud. The upside: AI is already diagnosing diseases doctors couldn't; kids are getting free world-class tutors; small businesses are punching way above their weight. But the downside he finally acknowledged: millions of jobs will vanish forever; state-level cyberweapons are now trivial to build; and if China or others leapfrog the US, it's a national security disaster. His words: "Some jobs will go away entirely… adversaries getting ahold of these would be a national security issue." Rare, unfiltered honesty from the man steering it all.

12/2/2025 Share

AI concern

Google CEO Sundar Pichai just told Fox News the one AI nightmare that actually keeps him awake: "When deepfakes get so good that we literally won't be able to tell what's real anymore… and bad actors get their hands on it." His exact words after Shannon Bream pressed him: "That's the kind of thing you sit and think about." He still believes humanity can steer it toward curing cancer — but the clock is ticking. This 64-second clip is chilling.

10/8/2025 Share

Samsung Recursive Model

A tiny 7-million-parameter model just beat DeepSeek R1, Gemini 2.5 Pro, and o3-mini at reasoning on both ARC-AGI-1 and ARC-AGI-2. It's called the Tiny Recursive Model (TRM), from Samsung. How can a model 10,000x smaller be smarter? Here's how it works:

1. Draft an initial answer: Unlike an LLM that writes word by word, TRM first generates a quick, complete "draft" of the solution. Think of this as its first rough guess.

2. Create a "scratchpad": It then creates a separate space for its internal thoughts, a latent reasoning "scratchpad." This is where the real magic happens.

3. Intensely self-critique: The model enters an intense inner loop. It compares its draft answer to the original problem and refines its reasoning on the scratchpad over and over (6 times in a row), asking itself, "Does my logic hold up? Where are the errors?"

4. Revise the answer: After this focused "thinking," it uses the improved logic from its scratchpad to create a brand-new, much better draft of the final answer.

5. Repeat until confident: The entire process (draft, think, revise) is repeated up to 16 times. Each cycle pushes the model closer to a correct, logically sound solution.

Why this matters:

Business leaders: This is what algorithmic advantage looks like. While competitors are paying massive inference costs for brute-force scale, a smarter, more efficient model can deliver superior performance for a tiny fraction of the cost.

Researchers: This is a major validation for neuro-symbolic ideas. The model's ability to recursively "think" before "acting" demonstrates that architecture, not just scale, can be a primary driver of reasoning ability.

Practitioners: SOTA reasoning is no longer gated behind billion-dollar GPU clusters. This paper provides a highly efficient, parameter-light blueprint for building specialized reasoners that can run anywhere.

This isn't just scaling down; it's a completely different, more deliberate way of solving problems.
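The draft/think/revise loop described above can be sketched in a few lines. This is a toy illustration, not the actual TRM architecture: the function names (`refine_scratchpad`, `revise_answer`) and the random linear maps standing in for the model's learned networks are my own inventions; only the loop structure (6 inner "thinking" steps, up to 16 outer revision cycles) comes from the description above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for TRM's learned networks (hypothetical, not from
# the paper): small random linear maps followed by a nonlinearity.
W_think = rng.standard_normal((8, 8)) * 0.1
W_act = rng.standard_normal((8, 8)) * 0.1

def refine_scratchpad(z, x, y):
    # Inner "thinking" step: update the latent scratchpad z using
    # both the original problem x and the current draft answer y.
    return np.tanh(W_think @ (z + x + y))

def revise_answer(z, y):
    # Outer "acting" step: produce a new draft answer from the
    # refined scratchpad and the previous draft.
    return np.tanh(W_act @ (z + y))

def trm_solve(x, n_cycles=16, n_think=6):
    y = np.zeros_like(x)  # initial rough draft answer
    z = np.zeros_like(x)  # latent reasoning scratchpad
    for _ in range(n_cycles):      # draft -> think -> revise, repeated
        for _ in range(n_think):   # 6 self-critique steps per cycle
            z = refine_scratchpad(z, x, y)
        y = revise_answer(z, y)    # brand-new, improved draft
    return y

answer = trm_solve(rng.standard_normal(8))
print(answer.shape)
```

The key design point is that the expensive recursion happens in the small latent scratchpad, not by generating ever-longer token sequences, which is why the approach stays cheap at only 7M parameters.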

10/7/2025 Musings

Kobe 81 pts

Kobe Bryant 81 pts 28/46 FG

10/6/2025 Musings

The Sad Reality of Rust Adoption

🦀 If you've been hanging around developer socials, you've no doubt heard the heated recent debate about Rust's adoption in established software, especially Coreutils and Git. Some devs are thrilled, while others are unhappy about the move, particularly in battle-tested projects. The internet has, naturally, turned it into a meme war.

10/6/2025 Share

「Google is a special case」

Google is a special case, and he took the time to highlight them specifically. He says that being a "China hawk" has been treated as some kind of badge of honour, but is in fact a "badge of shame." He dispels the notion that China is decades or even years behind, stating they are "nanoseconds behind." I have been very doubtful about OpenAI's progress in the coming years and have seen it as bubbly, but Jensen is convinced they are going to be a hyperscale company, worth $1T in time.