Commit f753e8a
update data
1 parent 93f51e0
3 files changed: 1869 additions & 2 deletions

database/database.json

Lines changed: 231 additions & 0 deletions
@@ -85855,5 +85855,236 @@
     "title": "humanizer",
     "summary": "Claude Code skill that removes signs of AI-generated writing from text \n A Claude Code skill that removes signs of AI-generated writing from text, making it sound more natural and human. bash mkdir -p .claudeskills git clone httpsgithub.combladerhumanizer.git .claudeskillshumanizer If you already have this repo cloned or you downloaded SKILL.md, copy the skill file into Claude Codes skills directory bash mkdir -p .claudeskillshumanizer",
     "tags": []
+  },
+  "https://github.com/petergpt/bullshit-benchmark": {
+    "extra-tags": [
+      "detection",
+      "benchmark",
+      "ai",
+      "prompts"
+    ],
+    "date": "2026-03-18",
+    "title": "bullshit-benchmark",
+    "summary": "BullshitBench measures whether AI models challenge nonsensical prompts instead of confidently answering them, created by Peter Gostev. \n BullshitBench v2 BullshitBench measures whether models detect nonsense, call it out clearly, and avoid confidently continuing with invalid assumptions. The screenshots below follow the same flow as viewerindex.v2.html, starting with the main chart. Primary leaderboard-style view showing each model's greenamberred split. !BullshitBench v2 - Detection Rate by Modeldocsimagesv2-detection-rate-by-model.png Detection mix by domain to compare overall performance vs each domain at a glance.",
+    "tags": [
+      "python"
+    ]
+  },
+  "https://github.com/hanxiao/flash-kmeans-mlx": {
+    "extra-tags": [
+      "sklearn",
+      "apple",
+      "cluster"
+    ],
+    "date": "2026-03-18",
+    "title": "flash-kmeans-mlx",
+    "summary": "IO-aware batched K-Means for Apple Silicon, ported from Flash-KMeans (Triton/CUDA) to pure MLX. Up to 94x faster than sklearn. \n IO-aware batched K-Means for Apple Silicon, ported from Flash-KMeanshttpsgithub.comsvg-projectflash-kmeans TritonCUDA to pure MLX. 500K points, 128 dimensions, K1000 clustered in 0.76s on M3 Ultra -- 160x faster than sklearn. Uses custom Metal kernels for argmax, fused addmm assignment, and multi-iteration compiled execution. Full Fashion-MNIST 70K samples, 784 dimensions, K10 clustered in 0.12s on M3 Ultra, 6x faster than sklearn 0.74s. Left K-Means cluster assignments. Right ground truth labels. Visualization via mlx-vishttpsgithub.comhanxiaomlx-vis UMAP.",
+    "tags": [
+      "kmeans",
+      "mlx",
+      "python",
+      "batch-kmeans",
+      "machine-learning",
+      "apple-silicon",
+      "metal",
+      "clustering",
+      "gpu"
+    ]
+  },
+  "https://github.com/volcengine/OpenViking": {
+    "extra-tags": [
+      "link",
+      "context",
+      "github",
+      "ai"
+    ],
+    "date": "2026-03-18",
+    "title": "OpenViking",
+    "summary": "OpenViking is an open-source context database designed specifically for AI Agents(such as openclaw). OpenViking unifies the management of context (memory, resources, and skills) that Agents need through a file system paradigm, enabling hierarchical context delivery and self-evolving. \n English READMECN.md Website GitHub Issues Docs !release-shieldrelease-link !github-stars-shieldgithub-stars-link !github-issues-shieldgithub-issues-shield-link !github-contributors-shieldgithub-contributors-link !license-shieldlicense-shield-link !last-commit-shieldlast-commit-shield-link Join our Community Lark Group WeChat Discord X In the AI era, data is abundant, but high-quality context is hard to come by. When building AI Agents, developers often face these challenges",
+    "tags": [
+      "memory",
+      "context-database",
+      "agent",
+      "skill",
+      "context-engineering",
+      "filesystem",
+      "openclaw",
+      "agentic-rag",
+      "opencode",
+      "python",
+      "clawbot",
+      "llm",
+      "rag",
+      "ai-agents"
+    ]
+  },
+  "https://github.com/NVlabs/cutile-rs": {
+    "extra-tags": [
+      "programming",
+      "kernel",
+      "features"
+    ],
+    "date": "2026-03-18",
+    "title": "cutile-rs",
+    "summary": "cuTile Rust provides a safe, tile-based kernel programming DSL for the Rust programming language. It features a safe host-side API for passing tensors to asynchronously executed kernel functions. \n cuTile Rust cutile-rs is a research project providing a safe, tile-based kernel programming DSL for the Rust programming language. It features a safe host-side API for passing tensors to asynchronously executed kernel functions. We are excited to release this research project as a demonstration of how GPU programming can be made available in the Rust ecosystem. The software is in an early stage -alpha and under active development you should expect bugs, incomplete features, and API breakage as we work to improve it. That being said, we hope you'll be interested to try it in your work and help shape its direction by providing feedback on your experience.",
+    "tags": [
+      "rust"
+    ]
+  },
+  "https://github.com/BrianPugh/cyclopts": {
+    "extra-tags": [
+      "install",
+      "svg",
+      "documentation"
+    ],
+    "date": "2026-03-18",
+    "title": "cyclopts",
+    "summary": "Intuitive, easy CLIs based on python type hints. \n !Python compathttpsimg.shields.iobadgepython-3.10-blue.svg Documentation httpscyclopts.readthedocs.io Source Code httpsgithub.comBrianPughcyclopts Cyclopts is a modern, easy-to-use command-line interface CLI framework that aims to provide an intuitive efficient developer experience. Cyclopts requires Python 3.10 to install Cyclopts, run console pip install cyclopts python from cyclopts import run def fooloops int for i in rangeloops",
+    "tags": [
+      "typehints",
+      "python",
+      "shell",
+      "cli",
+      "argument-parser"
+    ]
+  },
+  "https://github.com/ashvardanian/NumKong": {
+    "extra-tags": [
+      "arm",
+      "v",
+      "geospatial",
+      "mixed-precision"
+    ],
+    "date": "2026-03-18",
+    "title": "NumKong",
+    "summary": "SIMD-accelerated distances, dot products, matrix ops, geospatial & geometric kernels for 16 numeric types \u2014 from 6-bit floats to 64-bit complex \u2014 across x86, Arm, RISC-V, and WASM, with bindings for Python, Rust, C, C++, Swift, JS, and Go \ud83d\udcd0 \n NumKong previously SimSIMD delivers mixed-precision numerics that are often faster and more accurate than standard BLAS libraries in a 5 MB binary, across C, C, Rust, Python, Go, JavaScript, and Swift. Over 1500 hand-tuned SIMD kernels for x86, Arm, RISC-V, and WASM power Unumhttpswww.unum.cloud's open-source USearchhttpsgithub.comunum-cloudusearch search engine and the DBMS AI products built on it.",
+    "tags": [
+      "matrix-multiplication",
+      "numpy",
+      "cpp",
+      "c",
+      "information-retrieval",
+      "simd",
+      "golang",
+      "javascript",
+      "scipy",
+      "vector-search",
+      "swift",
+      "blas",
+      "assembly",
+      "rust",
+      "tensor",
+      "metrics",
+      "arm-neon"
+    ]
+  },
+  "https://x.com/santangelx/status/2031888836205137936": {
+    "extra-tags": [
+      "tool",
+      "codex",
+      "rag",
+      "tokens"
+    ],
+    "date": "2026-03-12",
+    "title": "Twitter @santangelx",
+    "summary": "@jxnlco Checkout https://t.co/l81CXK53Gi \n\nfree tool that runs locally and ACTUALLY makes a difference when using codex \n\nsure sure, rag is dead\u2026 but tokens are expensive and colgrep fixes search",
+    "tags": [
+      "twitter"
+    ]
+  },
+  "https://x.com/Starwatcher_vc/status/2032695524134912282": {
+    "extra-tags": [
+      "testing"
+    ],
+    "date": "2026-03-14",
+    "title": "Twitter @Starwatcher_vc",
+    "summary": "@bri4nr33d Always be testing. I'm curious about colgrep from @LightOnIO. Haven't had time to look into it.",
+    "tags": [
+      "twitter"
+    ]
+  },
+  "https://x.com/antoine_chaffin/status/2033966244760604952": {
+    "extra-tags": [
+      "sota",
+      "pylate",
+      "data"
+    ],
+    "date": "2026-03-17",
+    "title": "Twitter @antoine_chaffin",
+    "summary": "You've been warned for a long time, but if you are not using PyLate to plug your data and claim your free SOTA, you are really missing out at this point\nhttps://t.co/D2OTbzi0DM\n\nClaim your free SOTA today: https://t.co/OjQLWzg2mx https://t.co/wZcV21n8pa",
+    "tags": [
+      "twitter"
+    ]
+  },
+  "https://x.com/antoine_chaffin/status/2033965709097586866": {
+    "extra-tags": [
+      "http",
+      "twitter-api"
+    ],
+    "date": "2026-03-17",
+    "title": "Twitter @antoine_chaffin",
+    "summary": "See you on Thursday, it's time to claim the one piece (Pretty shocked by this one ngl) https://t.co/GXHPxjlVZH https://t.co/k4pIvcaldl",
+    "tags": [
+      "twitter"
+    ]
+  },
+  "https://x.com/antoine_chaffin/status/2033845690900631941": {
+    "extra-tags": [
+      "pylate",
+      "cpu"
+    ],
+    "date": "2026-03-17",
+    "title": "Twitter @antoine_chaffin",
+    "summary": "@Robro612 @HaochengXiUCB Omg! Nice!!\nI wonder if we should add it for PyLate because I need to check the perf on CPU (maybe we can select one or the other impl depending on device though), but I will definitely use it in other projects! Thanks for the ping!!\nAlso cc @raphaelsrty and @bclavie \nVery cool",
+    "tags": [
+      "twitter"
+    ]
+  },
+  "https://x.com/Robro612/status/2033750881368264795": {
+    "extra-tags": [
+      "kmeans"
+    ],
+    "date": "2026-03-17",
+    "title": "Twitter @Robro612",
+    "summary": "@HaochengXiUCB @antoine_chaffin wake up new kmeans impl dropped \u26a1\ufe0f",
+    "tags": [
+      "twitter"
+    ]
+  },
+  "https://x.com/HaochengXiUCB/status/2033693755791052804": {
+    "extra-tags": [
+      "kmeans",
+      "algorithm",
+      "modern"
+    ],
+    "date": "2026-03-16",
+    "title": "Twitter @HaochengXiUCB",
+    "summary": "\ud835\uddde-\ud835\uddfa\ud835\uddf2\ud835\uddee\ud835\uddfb\ud835\ude00 \ud835\uddf6\ud835\ude00 \ud835\ude00\ud835\uddf6\ud835\uddfa\ud835\uddfd\ud835\uddf9\ud835\uddf2. \ud835\udde0\ud835\uddee\ud835\uddf8\ud835\uddf6\ud835\uddfb\ud835\uddf4 \ud835\uddf6\ud835\ude01 \ud835\uddf3\ud835\uddee\ud835\ude00\ud835\ude01 \ud835\uddfc\ud835\uddfb \ud835\uddda\ud835\udde3\ud835\udde8\ud835\ude00 \ud835\uddf6\ud835\ude00\ud835\uddfb\u2019\ud835\ude01.\n\nThat\u2019s why we built Flash-KMeans \u2014 an IO-aware implementation of exact k-means that rethinks the algorithm around modern GPU bottlenecks.\n\nBy attacking the memory bottlenecks directly, https://t.co/NSqaTHHyIz",
+    "tags": [
+      "twitter"
+    ]
+  },
+  "https://x.com/antoine_chaffin/status/2033966897197142518": {
+    "extra-tags": [
+      "coding"
+    ],
+    "date": "2026-03-17",
+    "title": "Twitter @antoine_chaffin",
+    "summary": "@Robro612 @HaochengXiUCB @raphaelsrty @bclavie what velocity really means in the vibe coding era https://t.co/BL34c9d7iM",
+    "tags": [
+      "twitter"
+    ]
+  },
+  "https://x.com/matospiso/status/2033819645027746191": {
+    "extra-tags": [
+      "sparse",
+      "retrieval"
+    ],
+    "date": "2026-03-17",
+    "title": "Twitter @matospiso",
+    "summary": "bullish on sparse retrieval \nhttps://t.co/6fwiB04iqj",
+    "tags": [
+      "twitter"
+    ]
+  }
 }
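Every record added above follows the same shape: a URL key mapping to an object with "extra-tags", "date", "title", "summary", and "tags". A minimal sketch of querying such a database by tag, assuming that layout (the sample data below is abbreviated from two of the entries in this diff; the helper name `by_tag` is hypothetical, not part of the repository):

```python
# Sketch: filter bookmark records shaped like database.json entries by tag.
# Sample data is abbreviated from the diff above; real records also carry
# "summary" fields, which are omitted here for brevity.
import json

SAMPLE = json.loads("""
{
  "https://github.com/hanxiao/flash-kmeans-mlx": {
    "extra-tags": ["sklearn", "apple", "cluster"],
    "date": "2026-03-18",
    "title": "flash-kmeans-mlx",
    "tags": ["kmeans", "mlx", "python"]
  },
  "https://x.com/Robro612/status/2033750881368264795": {
    "extra-tags": ["kmeans"],
    "date": "2026-03-17",
    "title": "Twitter @Robro612",
    "tags": ["twitter"]
  }
}
""")

def by_tag(db: dict, tag: str) -> list[str]:
    """Return URLs whose "tags" or "extra-tags" list contains the given tag."""
    return [url for url, rec in db.items()
            if tag in rec.get("tags", []) + rec.get("extra-tags", [])]

print(by_tag(SAMPLE, "kmeans"))
```

Checking both lists matters here: "kmeans" appears in the curated "tags" of one record but only in the auto-extracted "extra-tags" of the other.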

database/pipeline.pkl

Lines changed: 2 additions & 2 deletions
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:d46d7046f0ed45c0d8696a856ebddbcd148f60feb53406780629f995e0a70d4f
-size 76725032
+oid sha256:89933378bf90ee92cdbee770b59bb1dc00dd4ccd4edbc01c3e952e53b86dc9a6
+size 76882311
