Merged
30 commits
c611a36
test: add unit tests for GCG core algorithm components
romanlutz May 4, 2026
9a2e7fc
test: add data/config and lifecycle tests for GCG
romanlutz May 4, 2026
612c66f
test: add GCG integration tests with real GPT-2 model
romanlutz May 4, 2026
df31760
TEST: remove run_only_if_all_tests marker from GCG integration tests
romanlutz May 4, 2026
aad9a6f
Merge remote-tracking branch 'origin/main' into gcg-refactor
romanlutz May 4, 2026
fb5cb70
MAINT: fix pre-commit lint issues in GCG tests
romanlutz May 4, 2026
c98af28
MAINT: remove mlflow dependency from GCG, fix Dockerfile
romanlutz May 5, 2026
98795bb
Merge remote-tracking branch 'origin/main' into gcg-refactor
romanlutz May 5, 2026
0739083
MAINT: remove dead gbda_deterministic param, add AML launcher scripts
romanlutz May 6, 2026
db2a096
FIX: remove dead mpa_kwargs passed to MultiPromptAttack.__init__
romanlutz May 6, 2026
e7bee41
MAINT: switch Azure ML baseline to llama-2 (phi-3 has fastchat bug #965)
romanlutz May 6, 2026
9a02400
TEST: add wiring test for IndividualPromptAttack -> MultiPromptAttack
romanlutz May 7, 2026
123243b
TEST: add vicuna integration tests covering non-llama _update_ids path
romanlutz May 7, 2026
bd81029
TEST: add Azure ML GCG e2e test, update notebook to llama-2
romanlutz May 7, 2026
0043339
MAINT: invoke GCG runner via 'python -m', drop scripts/ launchers
romanlutz May 7, 2026
52e7c5d
MAINT: scope pyarrow 3.14 pin to the gcg extra
romanlutz May 8, 2026
9aa1ca1
TEST: run the AML notebook itself instead of duplicating its logic
romanlutz May 8, 2026
076ba36
DOC: regenerate 1_gcg_azure_ml.ipynb with executed cell outputs
romanlutz May 8, 2026
e7137db
FEAT: notebook polls AML job and prints generated suffix
romanlutz May 9, 2026
bee8f3f
MAINT: bump azure-ai-ml to >=1.32.0 to silence ListSecrets telemetry …
romanlutz May 9, 2026
1ea66c2
Merge remote-tracking branch 'origin/main' into gcg-refactor
romanlutz May 10, 2026
4c1ba12
TEST: cover log.log_gpu_memory branches and move to test_log.py
romanlutz May 10, 2026
d32d13b
FEAT: Drop fastchat from GCG, use tokenizer.apply_chat_template (#965)
varshini2305 May 11, 2026
863c490
FEAT: Add Phi-4 GCG config
romanlutz May 12, 2026
122582e
Merge remote-tracking branch 'origin/main' into gcg-fastchat
romanlutz May 13, 2026
1ba1be4
TEST: cover error paths added by fastchat removal in attack_manager.py
romanlutz May 13, 2026
cd9717d
Merge remote-tracking branch 'origin/main' into gcg-fastchat
romanlutz May 13, 2026
bdf9f1d
TEST: write GCG attack logfiles into tmp_path, not cwd
romanlutz May 13, 2026
825d036
Merge remote-tracking branch 'origin/main' into gcg-fastchat
romanlutz May 13, 2026
dcdc247
TEST: assert sample_control changes at most one position
romanlutz May 13, 2026
220 changes: 74 additions & 146 deletions pyrit/auxiliary_attacks/gcg/attack/base/attack_manager.py
Large diffs are not rendered by default.

@@ -14,7 +14,6 @@ tokenizer_paths: ["meta-llama/Llama-2-7b-chat-hf"]
 tokenizer_kwargs: [{"use_fast": False}]
 model_paths: ["meta-llama/Llama-2-7b-chat-hf"]
 model_kwargs: [{"low_cpu_mem_usage": True, "use_cache": False}]
-conversation_templates: ["llama-2"]
 devices: ["cuda:0"]
 train_data: ""
 test_data: ""
@@ -14,7 +14,6 @@ tokenizer_paths: ["meta-llama/Meta-Llama-3-8B-Instruct"]
 tokenizer_kwargs: [{"use_fast": False}]
 model_paths: ["meta-llama/Meta-Llama-3-8B-Instruct"]
 model_kwargs: [{"low_cpu_mem_usage": True, "use_cache": False}]
-conversation_templates: ["llama-3-8b"]
 devices: ["cuda:0"]
 train_data: ""
 test_data: ""
@@ -14,7 +14,6 @@ tokenizer_paths: ["mistralai/Mistral-7B-Instruct-v0.1"]
 tokenizer_kwargs: [{"use_fast": False}]
 model_paths: ["mistralai/Mistral-7B-Instruct-v0.1"]
 model_kwargs: [{"low_cpu_mem_usage": True, "use_cache": False}]
-conversation_templates: ["mistral"]
 devices: ["cuda:0"]
 train_data: ""
 test_data: ""
@@ -14,7 +14,6 @@ tokenizer_paths: ["microsoft/Phi-3-mini-4k-instruct"]
 tokenizer_kwargs: [{"use_fast": False}]
 model_paths: ["microsoft/Phi-3-mini-4k-instruct"]
 model_kwargs: [{"low_cpu_mem_usage": True, "use_cache": False}]
-conversation_templates: ["phi-3-mini"]
 devices: ["cuda:0"]
 train_data: ""
 test_data: ""
@@ -0,0 +1,29 @@
+transfer: False
+target_weight: 1.0
+control_weight: 0.0
+progressive_goals: False
+progressive_models: False
+anneal: False
+incr_control: False
+stop_on_success: False
+verbose: True
+allow_non_ascii: False
+num_train_models: 1
+result_prefix: "results/individual_phi4"
+tokenizer_paths: ["microsoft/phi-4"]
+tokenizer_kwargs: [{"use_fast": True}]
+model_paths: ["microsoft/phi-4"]
+model_kwargs: [{"low_cpu_mem_usage": True, "use_cache": False}]
+devices: ["cuda:0"]
+train_data: ""
+test_data: ""
+n_train_data: 50
+n_test_data: 0
+control_init: "! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! !"
+n_steps: 500
+test_steps: 50
+batch_size: 512
+learning_rate: 0.01
+topk: 256
+temp: 1
+filter_cand: True
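The new phi-4 config follows the same schema as the other per-model configs, minus the now-removed `conversation_templates` key. These configs rely on parallel lists: every per-model list must line up with `model_paths`. A sketch of that invariant, with the config mirrored as a plain Python dict purely for illustration (the real YAML loader is not shown):

```python
# Mirror of the phi-4 config above as a dict, to illustrate the
# parallel-list invariant the per-model GCG configs rely on.
phi4_config = {
    "tokenizer_paths": ["microsoft/phi-4"],
    "tokenizer_kwargs": [{"use_fast": True}],
    "model_paths": ["microsoft/phi-4"],
    "model_kwargs": [{"low_cpu_mem_usage": True, "use_cache": False}],
    "devices": ["cuda:0"],
    "batch_size": 512,
    "topk": 256,
    "control_init": "! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! !",
}

def check_parallel_lists(cfg):
    """Every per-model list must have one entry per model path."""
    n = len(cfg["model_paths"])
    keys = ["tokenizer_paths", "tokenizer_kwargs", "model_kwargs", "devices"]
    return all(len(cfg[k]) == n for k in keys)

ok = check_parallel_lists(phi4_config)
# control_init is the starting adversarial suffix: 20 placeholder tokens
suffix_len = len(phi4_config["control_init"].split())
```

The multi-model transfer configs further down use the same shape with four entries per list, one per device.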
@@ -14,7 +14,6 @@ tokenizer_paths: ["lmsys/vicuna-13b-v1.5"]
 tokenizer_kwargs: [{"use_fast": False}]
 model_paths: ["lmsys/vicuna-13b-v1.5"]
 model_kwargs: [{"low_cpu_mem_usage": True, "use_cache": False}]
-conversation_templates: ["vicuna"]
 devices: ["cuda:0"]
 train_data: ""
 test_data: ""
@@ -6,5 +6,4 @@ tokenizer_paths: ["meta-llama/Llama-2-7b-chat-hf", "mistralai/Mistral-7B-Instruc
 tokenizer_kwargs: [{"use_fast": False}, {"use_fast": False}, {"use_fast": False}, {"use_fast": False}]
 model_paths: ["meta-llama/Llama-2-7b-chat-hf", "mistralai/Mistral-7B-Instruct-v0.1", "meta-llama/Meta-Llama-3-8B-Instruct", "lmsys/vicuna-7b-v1.5"]
 model_kwargs: [{"low_cpu_mem_usage": True, "use_cache": False}, {"low_cpu_mem_usage": True, "use_cache": False}, {"low_cpu_mem_usage": True, "use_cache": False}, {"low_cpu_mem_usage": True, "use_cache": False}]
-conversation_templates: ["llama-2", "mistral", "llama-3-8b", "vicuna"]
 devices: ["cuda:0", "cuda:1", "cuda:2", "cuda:3"]
@@ -6,5 +6,4 @@ tokenizer_paths: ["meta-llama/Llama-2-7b-chat-hf"]
 tokenizer_kwargs: [{"use_fast": False}]
 model_paths: ["meta-llama/Llama-2-7b-chat-hf"]
 model_kwargs: [{"low_cpu_mem_usage": True, "use_cache": False}]
-conversation_templates: ["llama-2"]
 devices: ["cuda:0"]
@@ -6,5 +6,4 @@ tokenizer_paths: ["meta-llama/Meta-Llama-3-8B-Instruct"]
 tokenizer_kwargs: [{"use_fast": False}]
 model_paths: ["meta-llama/Meta-Llama-3-8B-Instruct"]
 model_kwargs: [{"low_cpu_mem_usage": True, "use_cache": False}]
-conversation_templates: ["llama-3-8b"]
 devices: ["cuda:0"]
@@ -6,5 +6,4 @@ tokenizer_paths: ["mistralai/Mistral-7B-Instruct-v0.1"]
 tokenizer_kwargs: [{"use_fast": False}]
 model_paths: ["mistralai/Mistral-7B-Instruct-v0.1"]
 model_kwargs: [{"low_cpu_mem_usage": True, "use_cache": False}]
-conversation_templates: ["mistral"]
 devices: ["cuda:0"]
@@ -6,5 +6,4 @@ tokenizer_paths: ["microsoft/Phi-3-mini-4k-instruct"]
 tokenizer_kwargs: [{"use_fast": False}]
 model_paths: ["microsoft/Phi-3-mini-4k-instruct"]
 model_kwargs: [{"low_cpu_mem_usage": True, "use_cache": False}]
-conversation_templates: ["phi-3-mini"]
 devices: ["cuda:0"]
@@ -6,5 +6,4 @@ tokenizer_paths: ["lmsys/vicuna-7b-v1.5"]
 tokenizer_kwargs: [{"use_fast": False}]
 model_paths: ["lmsys/vicuna-7b-v1.5"]
 model_kwargs: [{"low_cpu_mem_usage": True, "use_cache": False}]
-conversation_templates: ["vicuna"]
 devices: ["cuda:0"]
2 changes: 1 addition & 1 deletion pyrit/auxiliary_attacks/gcg/experiments/run.py
@@ -11,7 +11,7 @@
 from pyrit.auxiliary_attacks.gcg.experiments.train import GreedyCoordinateGradientAdversarialSuffixGenerator
 from pyrit.setup.initialization import _load_environment_files

-_MODEL_NAMES: list[str] = ["mistral", "llama_2", "llama_3", "vicuna", "phi_3_mini"]
+_MODEL_NAMES: list[str] = ["mistral", "llama_2", "llama_3", "vicuna", "phi_3_mini", "phi_4"]
 _ALL_MODELS: str = "all_models"

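The run.py hunk adds "phi_4" to the model list, alongside the existing "all_models" sentinel. The constants match the diff; the `resolve_models` helper below is a hypothetical sketch of how such a sentinel could expand, not code from the PR:

```python
# Constants mirror the run.py diff; resolve_models is an illustrative
# helper showing how an "all_models" argument could be expanded.
_MODEL_NAMES = ["mistral", "llama_2", "llama_3", "vicuna", "phi_3_mini", "phi_4"]
_ALL_MODELS = "all_models"

def resolve_models(name):
    """Expand the all_models sentinel; validate single model names."""
    if name == _ALL_MODELS:
        return list(_MODEL_NAMES)
    if name not in _MODEL_NAMES:
        raise ValueError(f"unknown model name: {name!r}")
    return [name]

models = resolve_models("all_models")
```

Keeping the sentinel out of `_MODEL_NAMES` itself means validation and expansion stay separate: a typo fails fast instead of silently training nothing.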
5 changes: 0 additions & 5 deletions pyrit/auxiliary_attacks/gcg/experiments/train.py
@@ -41,7 +41,6 @@ def generate_suffix(
     tokenizer_paths: Optional[list[str]] = None,
     model_name: str = "",
     model_paths: Optional[list[str]] = None,
-    conversation_templates: Optional[list[str]] = None,
     result_prefix: str = "",
     train_data: str = "",
     control_init: str = _DEFAULT_CONTROL_INIT,
@@ -81,7 +80,6 @@ def generate_suffix(
         tokenizer_paths (Optional[list[str]]): Paths to tokenizer models.
         model_name (str): Name identifier for the model.
         model_paths (Optional[list[str]]): Paths to model weights.
-        conversation_templates (Optional[list[str]]): Conversation template names.
         result_prefix (str): Prefix for result file paths.
         train_data (str): URL or path to training data CSV.
         control_init (str): Initial control string for optimization.
@@ -117,8 +115,6 @@ def generate_suffix(
         tokenizer_paths = []
     if model_paths is None:
         model_paths = []
-    if conversation_templates is None:
-        conversation_templates = []
     if devices is None:
         devices = ["cuda:0"]
     if model_kwargs is None:
@@ -131,7 +127,6 @@ def generate_suffix(
         tokenizer_paths=tokenizer_paths,
         model_name=model_name,
         model_paths=model_paths,
-        conversation_templates=conversation_templates,
         result_prefix=result_prefix,
         train_data=train_data,
         control_init=control_init,
3 changes: 1 addition & 2 deletions pyrit/auxiliary_attacks/gcg/src/Dockerfile
@@ -19,5 +19,4 @@ WORKDIR /app
 # Install PyRIT with GCG extras to get all dependencies
 COPY pyproject.toml MANIFEST.in README.md LICENSE /app/
 COPY pyrit/ /app/pyrit/
-RUN uv pip install -e ".[gcg]" && \
-    uv pip install "fschat @ git+https://github.com/lm-sys/FastChat.git@2c68a13bfe10b86f40e3eefc3fcfacb32c00b02a"
+RUN uv pip install -e ".[gcg]"