You can still call `os.system` through the notebook's `run_generated_pandas` guard, so ideally some form of code-execution sandboxing would be better. The guard:
```python
import numpy as np
import pandas as pd

SAFE_GLOBALS = {"pd": pd, "np": np}

def run_generated_pandas(code: str, df_local: pd.DataFrame):
    # Substring blocklist: reject code containing any of these fragments.
    banned = ["__", "import", "open(", "exec(", "eval(", "os.", "sys.", "pd.read", "to_csv", "to_pickle", "to_sql"]
    if any(b in code for b in banned):
        raise ValueError("Unsafe code rejected.")
    loc = {"df": df_local.copy()}
    exec(code, SAFE_GLOBALS, loc)
    return {k: v for k, v in loc.items() if k != "df"}
```
still allows this payload to execute:
run_generated_pandas("getattr(getattr(np._pytesttester, bytes([111,115]).decode('ascii')), bytes([115, 121, 115, 116, 101, 109]).decode('ascii'))('calc')")
Increasing the complexity of the blocklist makes the attack harder, but it doesn't make it impossible: the attribute names above are assembled at runtime from byte values, so no banned substring ever appears in the source for the filter to match. Restricting Python like this is a genuinely hard problem, which is why running this kind of generated code in a sandbox can be easier and more provably secure.
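As a minimal sketch of that direction (the `run_sandboxed_pandas` name and the pickle hand-off are illustrative, not part of the notebook), the generated code can at least be executed in a fresh interpreter process with a hard timeout. A subprocess on its own does not stop `os.system`, but it is the natural seam for adding OS-level confinement such as a container or a seccomp profile:

```python
import subprocess
import sys
import tempfile
from pathlib import Path

import pandas as pd

def run_sandboxed_pandas(code: str, df_local: pd.DataFrame, timeout: float = 5.0) -> str:
    """Run generated pandas code in a fresh, isolated interpreter process.

    A subprocess is not a full sandbox: it only isolates interpreter state
    and enforces a timeout. Pair it with OS-level confinement in production.
    """
    with tempfile.TemporaryDirectory() as tmp:
        data = Path(tmp) / "df.pkl"
        # Pickling is safe here because we produce the file ourselves;
        # the generated code never controls its contents.
        df_local.to_pickle(data)
        harness = (
            "import pandas as pd\n"
            f"df = pd.read_pickle({str(data)!r})\n"
            + code
        )
        # -I: isolated mode (ignores PYTHON* env vars, user site-packages,
        # and the script directory on sys.path).
        # Raises subprocess.TimeoutExpired if the code runs too long.
        proc = subprocess.run(
            [sys.executable, "-I", "-c", harness],
            capture_output=True, text=True, timeout=timeout,
        )
        if proc.returncode != 0:
            raise RuntimeError(f"Generated code failed:\n{proc.stderr}")
        return proc.stdout
```

The same harness can then be pointed at a locked-down runtime (no network, read-only filesystem, dropped privileges) without changing the calling code.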
The guard in question: AI-Tutorial-Codes-Included/Data Science/Building an End-to-End Data Science Workflow with Machine Learning, Interpretability, and Gemini AI Assistance.ipynb, line 1460 in 52570e8.