
Commit c1459d9

Update README to include support for Phi-3, IBM Granite 3.2+, and IBM Granite 4.0 models
1 parent a8759a2

1 file changed

README.md

Lines changed: 1 addition & 1 deletion
@@ -19,7 +19,7 @@
 <strong>Llama3</strong> models written in <strong>native Java</strong> automatically accelerated on GPUs with <a href="https://github.com/beehive-lab/TornadoVM" target="_blank"><strong>TornadoVM</strong></a>.
 Runs Llama3 inference efficiently using TornadoVM's GPU acceleration.
 <br><br>
-Currently, supports <strong>Llama3</strong>, <strong>Mistral</strong>, <strong>Qwen2.5</strong>, <strong>Qwen3</strong> and <strong>Phi3</strong> , <strong> IBM Granite 3.1+ </strong> models in the GGUF format.
+Currently, supports <strong>Llama3</strong>, <strong>Mistral</strong>, <strong>Qwen2.5</strong>, <strong>Qwen3</strong>, <strong>Phi-3</strong>, <strong> IBM Granite 3.2+ </strong> and <strong> IBM Granite 4.0 </strong> models in the GGUF format.
 Also, it is used as GPU inference engine in
 <a href="https://docs.quarkiverse.io/quarkus-langchain4j/dev/gpullama3-chat-model.html" target="_blank">Quarkus</a>
 and

0 commit comments