swiglu_fused now uses exact expf by default (was NEON Schraudolph).
R27-29 had fixed DeltaNet but MoE kept fast_exp; the error compounds
over 30 MoE layers × 500+ tokens, contributing to Qwen3.6-35B
long-context degradation.
Opt-out: TQ_MOE_FAST_EXP=1 restores Schraudolph NEON path.
A/B Qwen3.6-35B at 400w:
fast: "most AI/ML (AI/ML) is a powerful tool for large-scale
data processing."
exact: "most of the other ' is a very important. The democratization
of this is a very important and another particularly powerful
and even more so that"
Longer, more varied output. Speed cost unmeasurable (28-29s TTFT,
identical on both paths at 280w). SwiGLU is not the bottleneck.
Qwen3.6-35B at 500+ words still degrades — this is one contributor
to the multi-source MoE long-context bug. More sources to investigate.
Regression 15/15 + tokenizer 4/4 PASS.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
README.ko.md (+2 lines: 2 additions, 0 deletions)
> **v2 follow-up — Working Memory Cliff (2026-04-11)**: We re-measured the v1 results on a larger grid (1B/3B models, ctx 256-2048, 204 NIAH trials plus an FP32-weights control experiment). Both models show a sharp cliff at **less than 1%** of their nominal 128K context window (1B Q8 cliff at 512-1024, 3B Q4 cliff at 1024-1280, as a **step function**). The 6.4× KV compression matches the fp32 baseline bit-for-bit in 18 of 20 cells — the cliff is a model property, not a KV/weight quantization artifact. Honest reinterpretation: Beyond RAG only works for documents that fit inside the *effective* working memory, and that size is one hundredth to one thousandth of the nominal context window. Full tech report: [`docs/paper/working-memory-cliff.md`](docs/paper/working-memory-cliff.md). HuggingFace blog post draft: [`docs/paper/hf-blog-draft.md`](docs/paper/hf-blog-draft.md).

> **v3.17 MoE SwiGLU exact expf — Qwen3.6 coherence margin (2026-04-20)**: MoE `swiglu_fused` now uses exact `expf` by default instead of the Schraudolph approximation (~2% error). R27-29 applied this to DeltaNet but MoE still used fast_exp. Over 30 MoE layers × 500+ tokens the error compounds. After the fix, 400-word Qwen3.6 prompts produce longer, more varied continuations. Speed cost: unmeasurable (SwiGLU is not the bottleneck; 28-29s TTFT identical at 280w). Opt-out: `TQ_MOE_FAST_EXP=1`. Degradation at 500+ words remains (this resolves one cause of a multi-source bug). 15/15 regression PASS. v0.24.0.

> **v3.16 ★★ Long-prompt silent-truncation FIXED (2026-04-20)**: Prompts longer than 4096 chars (~700 English words) were silently cut off by a 4096-token caller buffer in `tq_generate.c`. Because BPE tokenizes char-level first, the 4096 cap applied before merges, so text past char 4096 disappeared. Fix: bumped to 32768 plus dynamic sizeof. Diagnosis via OpenMythos reference-diff: HF Qwen3-0.6B tokenized a 561-word document to 698 tokens, our engine to 684 — and our last token decoded to `". The abacus"` (from the **beginning** of the text!), proving truncation. After the fix, Qwen3.5-4B (dense hybrid) handles the 561-word document coherently: *"the future of AI is not just about what we can do with it - it's about how we think about what matters most to us."* ✓. Qwen3.6-35B MoE hybrid still falls into a repetition loop at 561w — the bug is now isolated to **MoE feedback accumulation at long positions** (DeltaNet and the tokenizer are proven correct by Qwen3.5-4B working). 15/15 regression PASS. v0.23.0: [`docs/RELEASE_NOTES.md`](docs/RELEASE_NOTES.md).

> **v3.15 Qwen3.6 chunked batched prefill (+30% TTFT, 2026-04-20)**: Batched MoE dispatch is split into chunks of 8 tokens (tunable via `TQ_MOE_BATCH_CHUNK`) — this preserves the small-N safe region while recovering most of the batched speedup. State (KV cache, DeltaNet ssm) is already persistent across driver calls, so chunking is semantically correct. Measured on Qwen3.6-35B IQ4_XS: 44-word prose TTFT **12.6s → 7.0s (+44%)**, 280-word **38.0s → 29.4s (+29%)**, with the same correct summaries. Verified up to ~300-word documents. At 500+ words a separate accumulation bug remains (both batched and per-token paths fail — a KV/DeltaNet-state issue distinct from the MoE scatter bug). 15/15 regression PASS. v0.22.0: [`docs/RELEASE_NOTES.md`](docs/RELEASE_NOTES.md).
README.md (+2 lines: 2 additions, 0 deletions)
> **v3.2 batched prefill (2026-04-16):** Prompt prefill was the widest gap vs llama.cpp (40-50× slower). A new `tq_forward_batch` path uses batched matrix-matrix matmul via Apple AMX (`cblas_sgemm`-inspired, 1.2 TFLOPS). **Now enabled by default on all supported architectures** (Llama family, both FP32 KV and default `turbo_kv_4b` KV compression modes). On Llama-3.2-1B Q8 with a ~250-token prompt: **42.7s → 5.9s end-to-end** (**7.2× total**, with default KV compression). Output bit-identical to per-token baseline. Commits `ed4b087`, `672fea2`, `f4934e9`, plus quant K cache write support.
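
For orientation, a rough sketch of the batched idea (not the actual `tq_forward_batch` internals; the matrix names and row-major layout are assumptions): instead of one matrix-vector product per prompt token, the whole prompt's activations go through each weight matrix in a single `cblas_sgemm` call, which Accelerate routes through the AMX units.

```c
#include <Accelerate/Accelerate.h>  /* Apple CBLAS; large sgemm runs on AMX */

/* One projection for the whole prompt: X is [n_tokens x d_model] activations
 * (row-major), W is [d_model x d_out] weights, Y is [n_tokens x d_out].
 * Per-token prefill is equivalent to calling this with n_tokens == 1 in a loop. */
static void project_batch(const float *X, const float *W, float *Y,
                          int n_tokens, int d_model, int d_out)
{
    cblas_sgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                n_tokens, d_out, d_model,
                1.0f, X, d_model,
                W, d_out,
                0.0f, Y, d_out);
}
```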
> **v3.17 MoE SwiGLU exact expf — Qwen3.6 coherence margin (2026-04-20):** MoE `swiglu_fused` now uses exact `expf` by default instead of Schraudolph (~2% per-call error). R27-29 had fixed this for DeltaNet but MoE kept fast_exp. With 30 MoE layers × 500+ tokens, the error compounds. After the fix, 400-word Qwen3.6 prompts produce longer, more varied continuation. Speed cost: unmeasurable (SwiGLU is not the bottleneck; 28-29s TTFT identical before/after on 280w). Opt-out: `TQ_MOE_FAST_EXP=1`. 500+ word degradation still exists (multi-source bug, this is one contributor). 15/15 regression PASS. v0.24.0.

> **v3.16 ★★ Prompt buffer silent-truncation FIXED (2026-04-20):** Prompts longer than ~4096 chars (~700 words of English) were being silently cut off by a 4096-token caller buffer in `tq_generate.c`. Our BPE is char-level first, then merged, so the 4096 cap was hit BEFORE merges reduced the count; text past char 4096 was gone. Fix: bumped to 32768 with dynamic sizeof. Diagnostic via OpenMythos reference-diff: HF Qwen3-0.6B tokenized the 561-word doc to 698 tokens, our engine to 684 — and our last tokens decoded to `". The abacus"` (from the BEGINNING of the text!), proving truncation. After the fix, Qwen3.5-4B (dense hybrid) handles the 561-word document coherently: *"the future of AI is not just about what we can do with it - it's about how we think about what matters most to us."* ✓. Qwen3.6-35B MoE hybrid STILL fails at 561w with a repetition loop — the bug is now isolated to **MoE feedback accumulation at long positions** (DeltaNet and tokenization proven correct by Qwen3.5-4B working). 15/15 regression PASS. v0.23.0: [`docs/RELEASE_NOTES.md`](docs/RELEASE_NOTES.md).
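
To make the mechanism concrete, a toy sketch (not the engine's tokenizer; `pretokenize` and the byte-per-char scheme are stand-ins) of why a fixed 4096-entry cap bites before merges, and how sizing the cap from the buffer itself avoids the silent cut:

```c
#include <stdio.h>
#include <string.h>

/* Char-level pre-tokenization emits one id per byte BEFORE BPE merges,
 * so the pre-merge count equals strlen(prompt) and can blow past a fixed
 * cap even when the merged token count would be far smaller. */
static int pretokenize(const char *text, int *out, int cap)
{
    int n = 0;
    for (const char *p = text; *p && n < cap; p++)
        out[n++] = (unsigned char)*p;   /* the old code stopped here silently */
    return n;
}

int main(void)
{
    static int ids[32768];                        /* fix: larger buffer...    */
    int cap = (int)(sizeof ids / sizeof ids[0]);  /* ...with dynamic sizeof   */

    char prompt[5000];                /* ~5000-char prompt, > the old 4096 cap */
    memset(prompt, 'a', sizeof prompt - 1);
    prompt[sizeof prompt - 1] = '\0';

    printf("old cap 4096 keeps %d of %zu chars; new cap %d keeps %d\n",
           pretokenize(prompt, ids, 4096), strlen(prompt),
           cap, pretokenize(prompt, ids, cap));
    return 0;
}
```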
> **v3.15 Qwen3.6 chunked batched prefill (+30% TTFT, 2026-04-20):** Batched MoE dispatch now runs in chunks of 8 tokens (configurable via `TQ_MOE_BATCH_CHUNK`), which preserves the small-N safe region while recovering most of the batched speedup. State (KV cache, DeltaNet ssm) is already persistent across driver calls, so chunking is semantically correct. Measured on Qwen3.6-35B IQ4_XS: 44-word prose TTFT **12.6s → 7.0s (+44%)**, 280-word **38.0s → 29.4s (+29%)**, same correct summaries. Tested up to ~300-word documents. 500+ words shows a separate accumulation bug (both paths — batched and per-token — fail, indicating a KV/DeltaNet-state issue distinct from the MoE scatter bug). 15/15 regression PASS. v0.22.0: [`docs/RELEASE_NOTES.md`](docs/RELEASE_NOTES.md).
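
A minimal sketch of the chunked driver loop; the `tq_forward_batch` signature and the `tq_model` type are assumptions made for illustration, and only `TQ_MOE_BATCH_CHUNK` with its default of 8 comes from the notes above:

```c
#include <stdlib.h>

struct tq_model;  /* opaque; the engine's real model type is not shown here */

/* Assumed batched-forward entry point: consumes n prompt tokens starting at
 * absolute position pos, updating the KV cache / DeltaNet state inside m. */
void tq_forward_batch(struct tq_model *m, const int *tokens, int n, int pos);

/* Prefill the prompt in chunks of at most TQ_MOE_BATCH_CHUNK tokens
 * (default 8). Because KV cache and DeltaNet state persist across calls,
 * chunked prefill is semantically identical to one big batch. */
void prefill_chunked(struct tq_model *m, const int *tokens, int n_tokens)
{
    const char *env = getenv("TQ_MOE_BATCH_CHUNK");
    int chunk = (env && atoi(env) > 0) ? atoi(env) : 8;

    for (int pos = 0; pos < n_tokens; pos += chunk) {
        int n = n_tokens - pos < chunk ? n_tokens - pos : chunk;
        tq_forward_batch(m, tokens + pos, n, pos);
    }
}
```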
MoE SwiGLU activation now uses **exact expf** by default, replacing the ~2% error Schraudolph approximation. On Qwen3.6-35B this pushes back the long-context degradation boundary — 400-word documents now produce noticeably more coherent continuation. Speed cost: **unmeasurable** (SwiGLU is not the hot path).
### Change

`src/engine/tq_moe.c:swiglu_fused` now routes through an `expf` scalar loop by default. Opt-out: `TQ_MOE_FAST_EXP=1` reverts to the NEON Schraudolph path (for benchmarking only).
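
For reference, a minimal sketch of the two element-wise paths. Only `swiglu_fused` and `TQ_MOE_FAST_EXP` are names from the engine; the loop body and the fast-exp constants are illustrative, not the actual `tq_moe.c` code (which uses a NEON vector path for the approximation):

```c
#include <math.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Schraudolph-style fast exp (the opt-out path): builds the float bit
 * pattern directly from 2^23/ln2 and a bias constant. Roughly a couple of
 * percent relative error per call; production code also clamps x so the
 * bit pattern stays in range. */
static inline float fast_expf(float x)
{
    union { float f; int32_t i; } u;
    u.i = (int32_t)(12102203.0f * x) + 1064866805;
    return u.f;
}

/* SwiGLU: out[i] = silu(gate[i]) * up[i], with silu(x) = x / (1 + exp(-x)).
 * Default is exact expf; TQ_MOE_FAST_EXP=1 restores the approximation. */
static void swiglu_fused(float *out, const float *gate, const float *up, int n)
{
    const char *env = getenv("TQ_MOE_FAST_EXP");
    int use_fast = env && strcmp(env, "1") == 0;

    for (int i = 0; i < n; i++) {
        float g = gate[i];
        float e = use_fast ? fast_expf(-g) : expf(-g); /* exact by default */
        out[i] = (g / (1.0f + e)) * up[i];
    }
}
```

Per element the two paths differ by a single exp call, which is why the decode-speed delta disappears into noise.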
### A/B (Qwen3.6-35B IQ4_XS, 400-word prose + "In summary,")

| Path | Output |
|---|---|
| default (fast) | "most AI/ML (AI/ML) is a powerful tool for large-scale data processing." |
| **exact expf** | "most of the other ' is a very important. The democratization of this is a very important and another particularly powerful and even more so that" |
Longer, more varied output. Still not perfect at 400w+, but the degradation curve is noticeably softer. 280-word prose unchanged (already coherent pre-fix).
### Speed

Speed test on Qwen3.6-35B 280-word prompt (TTFT + decode):

- default fast: 28-29s TTFT, 8.9-9.3 tok/s decode
- exact expf: 28-29s TTFT, 9.0-9.3 tok/s decode

Identical within noise. SwiGLU is not a bottleneck on CPU.
### Known remaining

Qwen3.6-35B at 500+ words still degrades (repetition loops on some prompts). The MoE long-context accumulation bug has MULTIPLE compounding sources; exact expf is one contributor, not the full fix. Next investigation targets: MoE router softmax stability at long positions, expert scale factor correctness, DeltaNet state