Commit ffef5c1

jekyll page rebuild
1 parent 65685e9 commit ffef5c1

2 files changed

Lines changed: 109 additions & 4 deletions

File tree

_posts/2023/2023-04/2023-04-18-FauxPilot-开源插件-GitHub-Copilot .md

Lines changed: 95 additions & 3 deletions
@@ -2,7 +2,7 @@
 layout: post
 title: "FauxPilot:可本地运行的开源 GitHub Copilot (Copilot Plugin)"
 categories: GitHub Copilot OpenAI 开源插件
-tags: Copilot OpenAI 开源插件
+tags: Copilot
 author: Franklinfang
 ---

@@ -17,13 +17,105 @@ GitHub Copilot is the AI model that GitHub launched in June of last year; it is a

 Recently, Brendan Dolan-Gavitt, an assistant professor in the Department of Computer Science and Engineering at New York University, open-sourced a project named FauxPilot. According to its description, it is an alternative to GitHub Copilot that can run locally and does not upload user data; and if developers use an AI model they trained themselves, they also no longer need to worry about licensing issues with the generated code.
-
 GitHub Copilot relies on OpenAI Codex, a GPT-3-based system that translates natural language into code, trained on the "billions of lines of public code" stored on GitHub. FauxPilot does not use Codex; for ease of use it instead relies on Salesforce's CodeGen models, which are likewise trained on public open-source code.

 Salesforce CodeGen currently offers models with 350 million, 2 billion, 6 billion, and 16 billion parameters, but only the 350M, 6B, and 16B models are available in FauxPilot; the 2B model cannot be used for now. This places fairly high demands on the GPU used to run the models: the 350M-parameter model needs only 2 GB of VRAM, but the 6B-parameter model one tier up already jumps to 13 GB, which calls for at least an RTX 3090-class card, to say nothing of the 16B model.
+![image](2cbc40a18df03a0d3492702cae9c4f2d.png)
-![image](https://raw.githubusercontent.com/frankdevhub/frankdevhub.github.io/master/_posts/2023/2023-04/22cbc40a18df03a0d3492702cae9c4f2d.png)
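A quick back-of-the-envelope check of the VRAM figures quoted above: loading a model's weights in fp16 costs 2 bytes per parameter, which gives a lower bound before activations and runtime overhead are counted. A minimal sketch (model names and parameter counts come from the article; the 2-bytes-per-parameter assumption is standard fp16 arithmetic, not something FauxPilot documents):

```python
# Lower-bound VRAM estimate: fp16 weights take 2 bytes per parameter.
# Real usage is higher (activations, KV cache, runtime overhead), which is
# why the article quotes ~13 GB for the 6B model rather than ~11 GB.
def fp16_weight_gb(params: float) -> float:
    return params * 2 / 1024**3

for name, params in [("CodeGen-350M", 350e6), ("CodeGen-2B", 2e9),
                     ("CodeGen-6B", 6e9), ("CodeGen-16B", 16e9)]:
    print(f"{name}: ~{fp16_weight_gb(params):.1f} GB of VRAM just for weights")
```

This makes it clear why the 16B model is out of reach for consumer cards: its weights alone need roughly 30 GB before any runtime overhead.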

+$ ./launch.sh
+[+] Running 2/0
+⠿ Container fauxpilot-triton-1 Created 0.0s
+⠿ Container fauxpilot-copilot_proxy-1 Created 0.0s
+Attaching to fauxpilot-copilot_proxy-1, fauxpilot-triton-1
+fauxpilot-triton-1 |
+fauxpilot-triton-1 | =============================
+fauxpilot-triton-1 | == Triton Inference Server ==
+fauxpilot-triton-1 | =============================
+fauxpilot-triton-1 |
+fauxpilot-triton-1 | NVIDIA Release 22.06 (build 39726160)
+fauxpilot-triton-1 | Triton Server Version 2.23.0
+fauxpilot-triton-1 |
+fauxpilot-triton-1 | Copyright (c) 2018-2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+fauxpilot-triton-1 |
+fauxpilot-triton-1 | Various files include modifications (c) NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+fauxpilot-triton-1 |
+fauxpilot-triton-1 | This container image and its contents are governed by the NVIDIA Deep Learning Container License.
+fauxpilot-triton-1 | By pulling and using the container, you accept the terms and conditions of this license:
+fauxpilot-triton-1 | https://developer.nvidia.com/ngc/nvidia-deep-learning-container-license
+fauxpilot-copilot_proxy-1 | WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
+fauxpilot-copilot_proxy-1 | * Debug mode: off
+fauxpilot-copilot_proxy-1 | * Running on all addresses (0.0.0.0)
+fauxpilot-copilot_proxy-1 | WARNING: This is a development server. Do not use it in a production deployment.
+fauxpilot-copilot_proxy-1 | * Running on http://127.0.0.1:5000
+fauxpilot-copilot_proxy-1 | * Running on http://172.18.0.3:5000 (Press CTRL+C to quit)
+fauxpilot-triton-1 |
+fauxpilot-triton-1 | ERROR: This container was built for NVIDIA Driver Release 515.48 or later, but
+fauxpilot-triton-1 | version was detected and compatibility mode is UNAVAILABLE.
+fauxpilot-triton-1 |
+fauxpilot-triton-1 | [[]]
+fauxpilot-triton-1 |
+fauxpilot-triton-1 | I0803 01:51:02.690042 93 pinned_memory_manager.cc:240] Pinned memory pool is created at '0x7f6104000000' with size 268435456
+fauxpilot-triton-1 | I0803 01:51:02.690461 93 cuda_memory_manager.cc:105] CUDA memory pool is created on device 0 with size 67108864
+fauxpilot-triton-1 | I0803 01:51:02.692434 93 model_repository_manager.cc:1191] loading: fastertransformer:1
+fauxpilot-triton-1 | I0803 01:51:02.936798 93 libfastertransformer.cc:1226] TRITONBACKEND_Initialize: fastertransformer
+fauxpilot-triton-1 | I0803 01:51:02.936818 93 libfastertransformer.cc:1236] Triton TRITONBACKEND API version: 1.10
+fauxpilot-triton-1 | I0803 01:51:02.936821 93 libfastertransformer.cc:1242] 'fastertransformer' TRITONBACKEND API version: 1.10
+fauxpilot-triton-1 | I0803 01:51:02.936850 93 libfastertransformer.cc:1274] TRITONBACKEND_ModelInitialize: fastertransformer (version 1)
+fauxpilot-triton-1 | W0803 01:51:02.937855 93 libfastertransformer.cc:149] model configuration:
+fauxpilot-triton-1 | {
+[... lots more output trimmed ...]
+fauxpilot-triton-1 | I0803 01:51:04.711929 93 libfastertransformer.cc:321] After Loading Model:
+fauxpilot-triton-1 | I0803 01:51:04.712427 93 libfastertransformer.cc:537] Model instance is created on GPU NVIDIA RTX A6000
+fauxpilot-triton-1 | I0803 01:51:04.712694 93 model_repository_manager.cc:1345] successfully loaded 'fastertransformer' version 1
+fauxpilot-triton-1 | I0803 01:51:04.712841 93 server.cc:556]
+fauxpilot-triton-1 | +------------------+------+
+fauxpilot-triton-1 | | Repository Agent | Path |
+fauxpilot-triton-1 | +------------------+------+
+fauxpilot-triton-1 | +------------------+------+
+fauxpilot-triton-1 |
+fauxpilot-triton-1 | I0803 01:51:04.712916 93 server.cc:583]
+fauxpilot-triton-1 | +-------------------+-----------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------+
+fauxpilot-triton-1 | | Backend | Path | Config |
+fauxpilot-triton-1 | +-------------------+-----------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------+
+fauxpilot-triton-1 | | fastertransformer | /opt/tritonserver/backends/fastertransformer/libtriton_fastertransformer.so | {"cmdline":{"auto-complete-config":"false","min-compute-capability":"6.000000","backend-directory":"/opt/tritonserver/backends","default-max-batch-size":"4"}} |
+fauxpilot-triton-1 | +-------------------+-----------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------+
+fauxpilot-triton-1 |
+fauxpilot-triton-1 | I0803 01:51:04.712959 93 server.cc:626]
+fauxpilot-triton-1 | +-------------------+---------+--------+
+fauxpilot-triton-1 | | Model | Version | Status |
+fauxpilot-triton-1 | +-------------------+---------+--------+
+fauxpilot-triton-1 | | fastertransformer | 1 | READY |
+fauxpilot-triton-1 | +-------------------+---------+--------+
+fauxpilot-triton-1 |
+fauxpilot-triton-1 | I0803 01:51:04.738989 93 metrics.cc:650] Collecting metrics for GPU 0: NVIDIA RTX A6000
+fauxpilot-triton-1 | I0803 01:51:04.739373 93 tritonserver.cc:2159]
+fauxpilot-triton-1 | +----------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
+fauxpilot-triton-1 | | Option | Value |
+fauxpilot-triton-1 | +----------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
+fauxpilot-triton-1 | | server_id | triton |
+fauxpilot-triton-1 | | server_version | 2.23.0 |
+fauxpilot-triton-1 | | server_extensions | classification sequence model_repository model_repository(unload_dependents) schedule_policy model_configuration system_shared_memory cuda_shared_memory binary_tensor_data statistics trace |
+fauxpilot-triton-1 | | model_repository_path[0] | /model |
+fauxpilot-triton-1 | | model_control_mode | MODE_NONE |
+fauxpilot-triton-1 | | strict_model_config | 1 |
+fauxpilot-triton-1 | | rate_limit | OFF |
+fauxpilot-triton-1 | | pinned_memory_pool_byte_size | 268435456 |
+fauxpilot-triton-1 | | cuda_memory_pool_byte_size{0} | 67108864 |
+fauxpilot-triton-1 | | response_cache_byte_size | 0 |
+fauxpilot-triton-1 | | min_supported_compute_capability | 6.0 |
+fauxpilot-triton-1 | | strict_readiness | 1 |
+fauxpilot-triton-1 | | exit_timeout | 30 |
+fauxpilot-triton-1 | +----------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
+fauxpilot-triton-1 |
+fauxpilot-triton-1 | I0803 01:51:04.740423 93 grpc_server.cc:4587] Started GRPCInferenceService at 0.0.0.0:8001
+fauxpilot-triton-1 | I0803 01:51:04.740608 93 http_server.cc:3303] Started HTTPService at 0.0.0.0:8000
+fauxpilot-triton-1 | I0803 01:51:04.781561 93 http_server.cc:178] Started Metrics Service at 0.0.0.0:8002
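The log above shows the copilot_proxy container listening on port 5000, where FauxPilot serves an OpenAI-style completions API in front of Triton. A hedged sketch of what a client request payload might look like: the endpoint path `/v1/engines/codegen/completions` and the field names are assumptions modeled on the OpenAI completions API, so verify them against your FauxPilot version. The code only builds the request body; it does not contact a server.

```python
import json

# Assumed route on the FauxPilot proxy (port 5000 per the launch log above).
# The path mirrors the OpenAI completions API and may differ between versions.
FAUXPILOT_URL = "http://127.0.0.1:5000/v1/engines/codegen/completions"

# OpenAI-style completion parameters; low temperature keeps code suggestions
# deterministic, which is usually what you want for autocomplete.
payload = {
    "prompt": "def fib(n):",
    "max_tokens": 64,
    "temperature": 0.1,
}
body = json.dumps(payload).encode("utf-8")
print(FAUXPILOT_URL)
print(body)
```

In practice this is also why editor plugins can be pointed at FauxPilot by simply overriding the Copilot/OpenAI base URL to the local proxy.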

plugin/site-plugin/pom.xml

Lines changed: 14 additions & 1 deletion
@@ -361,12 +361,25 @@
           </goals>
           <configuration>
             <outputDirectory>${project.build.directory}</outputDirectory>
+            <!-- Without this resource entry, the mybatis mapper.xml files would all be left out of the build. -->
             <resources>
+              <resource>
+                <directory>src/main/java</directory>
+                <includes>
+                  <include>**/*.yml</include>
+                  <include>**/*.properties</include>
+                  <include>**/*.xml</include>
+                </includes>
+                <filtering>false</filtering>
+              </resource>
               <resource>
                 <directory>src/main/resources</directory>
                 <includes>
-                  <include>**/*.*</include>
+                  <include>**/*.yml</include>
+                  <include>**/*.properties</include>
+                  <include>**/*.xml</include>
                 </includes>
+                <filtering>false</filtering>
               </resource>
             </resources>
           </configuration>
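The effect of the added `<resource>` entry can be simulated outside Maven: every file under `src/main/java` matching one of the `<includes>` patterns is copied into the packaged artifact, which is how mapper XML files that sit next to their Java interfaces survive packaging. A minimal Python sketch of that selection logic (the package and file names are hypothetical examples, not taken from the project):

```python
from pathlib import Path
import tempfile

# Patterns from the pom's <includes>; Maven's **/*.xml matches recursively,
# which pathlib's rglob("*.xml") reproduces for this simple case.
includes = ("*.yml", "*.properties", "*.xml")

# Build a throwaway source tree with a mapper XML beside its Java interface.
root = Path(tempfile.mkdtemp())
mapper_dir = root / "src/main/java/com/example/mapper"
mapper_dir.mkdir(parents=True)
(mapper_dir / "UserMapper.xml").write_text("<mapper/>")
(root / "src/main/java/com/example/Service.java").write_text("class Service {}")

# Select what the <resource> entry would copy: matching files only,
# so .java sources stay out of the resource set.
src = root / "src/main/java"
copied = sorted(p.relative_to(src).as_posix()
                for pat in includes
                for p in src.rglob(pat))
print(copied)
```

Note that `<filtering>false</filtering>` matters here: with filtering on, Maven would substitute `${...}` placeholders inside the copied files, which can corrupt mapper XML.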

0 commit comments