
Important: Registration is free but mandatory. Registration deadline: Feb 18, 2026, 11:59 PM (ET).

Feb 20, 2026 (Friday) at ASRC Auditorium, City College, 85 St. Nicholas Terrace, New York, NY 10031.

Program

09:00 – 10:00. Introduction/Coffee
10:00 – 10:50. Mingyuan Wang (NYU)
Limitations of Input Filtering for LLM from a Cryptographic Lens
11:00 – 11:50. Tugce Ozdemir (CUNY)
Towards Verifiable AI with Lightweight Cryptographic Proofs of Inference
12:00 – 14:00. Lunch
14:00 – 14:50. Rahul Ilango (IAS)
Gödel in Cryptography: Zero-Knowledge for NP With No Interaction, No Setup, and Perfect Soundness
15:00 – 15:50. Chelsea Komlo (NEAR)
Golden: Lightweight Non-Interactive Distributed Key Generation

Registration (very important)

Registration is free but mandatory. Registration deadline: Feb 18, 2026, 11:59 PM (ET). Only registered participants will be allowed to enter.

Venue

Address: ASRC Auditorium, City College, 85 St. Nicholas Terrace, New York, NY 10031 (Visitor information page: here)

[Directions]

Organizers

Fabrice Benhamouda (Amazon Web Services)
Daniel Escudero (TACEO)
Tal Rabin (Amazon Web Services)
Mariana Raykova (Google)
with the help and support of Rosario Gennaro.

Support

NY CryptoDay is sponsored by Google.


Abstracts

  • Limitations of Input Filtering for LLM from a Cryptographic Lens / Mingyuan Wang (NYU)

    Large language models (LLMs) are increasingly adopted in everyday real-world applications. While they provide powerful new capabilities, their deployment also raises serious alignment and safety challenges. A widely used mitigation strategy is input filtering, in which a lightweight guard model screens user prompts before they reach the main LLM. Recent work by Ball et al. shows that this approach is fundamentally limited: under standard cryptographic assumptions, no efficient input filter can reliably distinguish malicious prompts from benign ones. In this talk, I will discuss this result together with our recent work demonstrating that this limitation is not merely theoretical, but leads to practical attacks on deployed LLM systems, exploiting the inherent resource asymmetry between guard models and the models they protect.


  • Towards Verifiable AI with Lightweight Cryptographic Proofs of Inference / Tugce Ozdemir (CUNY)

    Modern deep neural networks, despite their widespread success in domains from biology to large language models, face a critical security challenge when deployed as external cloud-based services. How do we know that the answers come from a trusted source or are even correct? A naive solution of local model execution is infeasible due to the large computational resources needed to run these networks. Current research addresses this by using cryptographic proofs to efficiently verify model outputs. However, this approach encounters significant computational overhead, as general cryptographic frameworks struggle with the complex non-arithmetic functions and sheer scale of contemporary neural networks. This paper proposes a novel and scalable verification method that avoids the computational cost of full model re-execution and cryptographic proofs. We present a protocol for verifiable inference of large AI models using lightweight cryptographic tools and based on statistical properties of the networks, making our approach faster than previous ones; the prover only needs to commit to the execution trace of inference of the AI model and open a small number of entries. We also present a version of the protocol in the refereed model with a logarithmic number of steps. We experimentally validate our protocol, showing that AI models exhibit the properties necessary for our approach, that our protocol is resilient to attacks, and that we achieve orders-of-magnitude improved performance over prior work.


  • Gödel in Cryptography: Zero-Knowledge for NP With No Interaction, No Setup, and Perfect Soundness / Rahul Ilango (IAS)

    Gödel showed that there are true but unprovable statements. This was bad news for Hilbert, who hoped that every true statement was provable. In this talk, I’ll describe why Gödel’s result is, in fact, good news for cryptography.

    Specifically, Gödel’s result allows for the following strange scenario: a cryptographic system S is insecure, but it is impossible to prove that S is insecure. As I will explain, in this scenario (defined carefully), S is secure for nearly all practical purposes.

    Leveraging this idea, we effectively construct — under longstanding assumptions — a classically-impossible cryptographic dream object: “zero-knowledge proofs for NP with no interaction, no setup, and perfect soundness.” As an application, our result lets one give an ordinary mathematical proof that a Sudoku puzzle is solvable without revealing how to solve it. Previously, it was not known how to do this (i.e. how to construct “non-interactive witness hiding proofs”).


  • Golden: Lightweight Non-Interactive Distributed Key Generation / Chelsea Komlo (NEAR)

    In this talk, we will present Golden, a non-interactive Distributed Key Generation (DKG) protocol. The core innovation of Golden is how it achieves public verifiability in a lightweight manner, allowing all participants to non-interactively verify that all other participants followed the protocol correctly. For this reason, Golden can be performed with only one round of (broadcast) communication. Non-interactive DKGs are important for distributed applications; as parties may go offline at any moment, reducing rounds of communication is a desirable feature.

