In LangChain through 0.0.131, the LLMMathChain chain allows prompt injection attacks that can execute arbitrary code via the Python exec method.
References
Link | Resource |
---|---|
https://github.com/hwchase17/langchain/issues/1026 | Issue Tracking |
https://github.com/hwchase17/langchain/issues/814 | Exploit Issue Tracking Patch |
https://github.com/hwchase17/langchain/pull/1119 | Patch |
https://twitter.com/rharang/status/1641899743608463365/photo/1 | Exploit |
History
No history.
MITRE Information
Status: PUBLISHED
Assigner: mitre
Published: 2023-04-05T00:00:00
Updated: 2023-04-05T00:00:00
Reserved: 2023-04-05T00:00:00
Link: CVE-2023-29374
JSON object: View
NVD Information
Status : Analyzed
Published: 2023-04-05T02:15:37.340
Modified: 2023-04-17T16:57:22.070
Link: CVE-2023-29374
JSON object: View
Redhat Information
No data.
CWE