See a history of my research by checking out my Google Scholar or DBLP pages.^{1}

These are topics that I am thinking about regularly now.

- Oblivious RAM (ORAM)
- Structured Encryption (StE)
- Leakage-Abuse Attacks (LAAs)
- Private Information Retrieval (PIR)

Below is a list of topics that I’ve read about and would like to work on again in the future. For some, I have specific questions I’d like to attack, and for others, my interest is more open-ended.

- Secret sharing, especially with variable access structures
- Applications of total function complexity to cryptography, and how it can capture problems that are not total
- Time-memory trade-offs, as well as the quantum variants
- Post-quantum cryptography, including analyzing security against quantum attacks and applications of homomorphic lattice encryption
- Quantum cryptography and quantum pseudorandomness, like these papers, for example

I’d like to see these questions resolved. If you’re interested in attacking any of them, please shoot me a message, and I can give you more thoughts/details about them. Or if any have been closed, please let me know! I’d be excited to see how they are resolved.

Our paper proves, in a balls-in-bins model, that an ORAM with *p* = *O*(*N*^{1/2}) overhead running in a single round requires at least *s* = *Ω*(*N*/*p*) bits of client memory. But there are some very simple follow-up questions that we failed to answer with our technique.

- Can this bound be extended to show the bandwidth-memory trade-off *s* ⋅ *p* ≥ *Ω*(*N*)? Our proofs fail when *p* > *N*^{1/2}, but the best known construction matches the trade-off.
- Can the bound be moved outside of the balls-in-bins model? Our techniques do not easily lend themselves to a compression proof, but the state-of-the-art bounds are not in the balls-in-bins model.
- What lower bounds, in any model, exist for *k*-round ORAM for constant *k* > 1? The best known constructions achieve *Õ*(*N*^{1/(k+1)}) overhead and client memory, which I conjecture is optimal (up to log factors).
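To make these parameter regimes concrete, here is a quick numeric sanity check (the concrete values of *N*, *p*, and *k* are my own illustrative choices, not from the paper):

```python
import math

# Illustrative parameter only: N is the number of logical blocks.
N = 2 ** 30

# Single-round lower bound: s >= N / p when p = O(N^(1/2)),
# i.e., the bandwidth-memory product satisfies s * p >= N.
p = math.isqrt(N)   # bandwidth overhead p = N^(1/2) = 32768
s = N // p          # client memory forced by the bound
assert s * p >= N

# Conjectured k-round behavior: overhead and client memory ~ N^(1/(k+1)).
for k in (1, 2, 3):
    print(k, round(N ** (1 / (k + 1))))
```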

This paper introduces the notion of (*s*, ℓ)-security for adversaries observing a server for *s* windows of length ℓ. It also proves that (*s*, ℓ)-security requires *Ω*(log *n*) overhead for *s* ≥ 3 and ℓ ≥ 1, or for *s* ≥ 4 and ℓ ≥ 0. Prior work had already shown the overhead is *Θ*(log ℓ) for *s* = 1, with a matching bound and construction. There is also a simple construction with *Θ*(1) overhead for (2, 0)-security. But what is the best construction or lower bound for (2, ℓ)-security for ℓ ≥ 1, and for (3, 0)-security?

What is the best bounded-memory attack and lower bound for distinguishing a truncated permutation from a random function? Prior work summarizes the attacks and bounds for the setting with unbounded memory, which is solved completely. Even a conjectured trade-off is not obvious, because the optimal attack does not require remembering every past input.
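As a small illustration of why a truncated permutation is distinguishable at all, here is a collision-counting sketch (my own toy simulation, with illustrative sizes *n* = 8 and *m* = 4; here *m* is the truncation width, unrelated to the memory bound in the question). Over the full codebook, a permutation truncated from *n* to *m* bits hits every *m*-bit output exactly 2^{n−m} times, so its collision count is deterministic and strictly below a random function’s expectation:

```python
import random

n, m = 8, 4        # illustrative sizes: truncate n-bit outputs to m bits
N = 1 << n

def collision_pairs(outputs, m):
    """Count colliding pairs among the m-bit outputs."""
    counts = [0] * (1 << m)
    for y in outputs:
        counts[y] += 1
    return sum(c * (c - 1) // 2 for c in counts)

# Truncated permutation: sample a random permutation of n-bit strings,
# keep the top m bits of each output.
perm = random.sample(range(N), N)
trunc = [x >> (n - m) for x in perm]
# Each m-bit value appears exactly 2^(n-m) = 16 times, so the count is
# always 2^m * C(16, 2) = 16 * 120 = 1920.
print(collision_pairs(trunc, m))   # 1920

# Random function: expected count is C(256, 2) / 2^m = 2040, strictly more.
f = [random.randrange(1 << m) for _ in range(N)]
print(collision_pairs(f, m))
```

Of course, this counter stores a bucket per output value; the open question above is precisely what happens when the attacker’s memory is the binding constraint.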

What is the best attack for distinguishing the XOR of two random permutations from a random function? In the appendix of this paper, they bound the distinguishing advantage with *q* samples by (*q*/*N*)^{1.5}, but I’m unaware of a matching attack.

Can the bounded-memory distinguishing bound for a random permutation versus a random function be slightly improved? This paper proves that the advantage for an *m*-bit attack with *q* samples is at most *O*(*mq* log *q*/*N*), but the best attack is *Ω*(*mq*/*N*). I suspect the log *q* is just an artifact of the proof technique, and a more direct technique will remove it, especially because there are other bounds which bound the divergence by *O*(*mq*/*N*) or *O*(*mq*/*N* log *N*) and apply Pinsker’s inequality.
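For intuition on the XOR-of-two-permutations question, here is the trivial full-codebook distinguisher as a toy simulation (my own sketch; it needs *q* = *N* queries, so it says nothing about matching the (*q*/*N*)^{1.5} bound at small *q*). For *n* ≥ 2, the outputs of any permutation of *n*-bit strings XOR to zero, so the XOR of two permutations does too, while a random function’s full-codebook XOR is uniform:

```python
import random
from functools import reduce
from operator import xor

n = 6                       # illustrative domain size
N = 1 << n

# XOR of two independent random permutations, evaluated on the whole domain.
p1 = random.sample(range(N), N)
p2 = random.sample(range(N), N)
g = [p1[x] ^ p2[x] for x in range(N)]
# Each permutation's outputs XOR to 0 (every bit is set in exactly 2^(n-1)
# outputs), so the construction's full-codebook XOR is always 0.
print(reduce(xor, g))       # 0

# A random function's full-codebook XOR is uniform: nonzero w.p. 1 - 2^-n.
f = [random.randrange(N) for _ in range(N)]
print(reduce(xor, f))
```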

Last updated: 2023-11

There is currently an error on my DBLP page, since Alexander Hoover is a somewhat popular name.↩︎