Hacker Newsnew | past | comments | ask | show | jobs | submitlogin

> In Deno Sandbox, secrets never enter the environment. Code sees only a placeholder

> The real key materializes only when the sandbox makes an outbound request to an approved host. If prompt-injected code tries to exfiltrate that placeholder to evil.com? Useless.

That seems clever.





Reminds me a little of Fly's Tokenizer - https://github.com/superfly/tokenizer

It's a little HTTP proxy that your application can route requests through, and the proxy is what handles adding the API keys or whatnot to the request to the service, rather than your application, something like this for example:

Application -> tokenizer -> Stripe

The secrets for the third party service should in theory then be safe should there be some leak or compromise of the application since it doesn't know the actual secrets itself.

Cool idea!


It's exactly the tokenizer, but we shoplifted the idea too; it belongs to the world!

(The credential thing I'm actually proud of is non-exfiltratable machine-bound Macaroons).

Remember that the security promises of this scheme depend on tight control over not only what hosts you'll send requests to, but what parts of the requests themselves.


How does this work with more complex authentication schemes, like AWS?

AWS has a more powerful abstraction already, where you can condition permissions such that they are only granted when the request comes from a certain VPC or IP address (i.e. VPN exit). Malware thus exfiltrated real credentials, but they'll be worthless.

I'm not prepared to say which abstraction is more powerful but I do think it's pretty funny to stack a non-exfiltratable credential up against AWS given how the IMDS works. IMDS was the motivation for machine-locked tokens for us.

There are two separate concerns here: who the credentials are associated with, and where the credentials are used. IMDS's original security flaw was that it only covered "who" the credentials were issued to (the VM) and not where they were used, but aforementioned IAM conditions now ensure that they are indeed used within the same VPC. If a separate proxy is setup to inject credentials, then while this may cover the "where" concern, care must still be taken on the "who" concern, i.e. to ensure that the proxy does not fall to confused deputy attacks arising from multiple sandboxed agents attempting to use the same proxy.

There are lots of concerns, not just two, but the point of machine-bound Macaroons is to address the IMDS problem.

Did the machine-bound Macaroons ever get written up publicly or is that proprietary?

Like the Tokenizer, I think they're open source.

https://fly.io/blog/operationalizing-macaroons/


This reminds me of a SaaS that existed 15+ years ago for PCI-DSS compliance. It did exactly that: you had it tokenize and store the sensitive data, and then you proxied your requests via it, and it inserted them into the request. It was a very neat way to get around storing data yourself.

I cannot remember what the platform was called, let me know if you do.


There are multiple companies doing that. I was using one a few years ago, also don't remember the name, haha.

I guess it's an obvious thing to sell, if you go through the process of PCI-DSS compliance. We were definitely considering splitting the company to a part that can handle these data and the rest of the business. The first part could then offer the service to other business, too.


I've been working on something similar (with claude code).

It's a sandbox that uses envoy as a transparent proxy locally, and then an external authz server that can swap the creds.

The idea is extended further in that the goal is to allow an org to basically create their own authz system for arbitrary upstreams, and then for users to leverage macaroons to attentuate the tokens at runtime.

It isn't finished but I'm trying to make it work with ssh/yubikeys as an identity layer. The authz macaroon can have a "hole" that is filled by the user/device attestation.

The sandbox has some nice features like browser forwarding for Claude oauth and a CDP proxy for working with Chrome/Electron (I'm building an Obsidian plugin).

I'm inspired by a lot of the fly.io stuff in tokenizer and sprites. Exciting times.

https://github.com/dtkav/agent-creds


Yes... but...

Presumably the proxy replaces any occurrence of the placeholder with the real key, without knowing anything about the context in which the key is used, right? Because if it knew that the key was to be used for e.g. HTTP basic auth, it could just be added by the proxy without using a placeholder.

So all the attacker would have to do then is find and endpoint (on one of the approved hosts, granted) that echoes back the value, e.g. "What is your name?" -> "Hello $name!", right?

But probably the proxy replaces the real key when it comes back in the other direction, so the attacker would have to find an endpoint that does some kind of reversible transformation on the value in the response to disguise it.

It seems safer and simpler to, as others have mentioned, have a proxy that knows more about the context add the secrets to the requests. But maybe I've misunderstood their placeholder solution or maybe it's more clever than I'm giving it credit for.


Where would this happen? I have never seen an API reflect a secret back but I guess it's possible? perhaps some sort of token creation endpoint?

The point is that without semantic knowledge, there's no way of knowing whether the API actually considers it a secret. If you're using the Github API and have it listed as an approved host but the sandbox doesn't predefine which fields are valid or not to include the token, a malicious application could put the placeholder in the body of an API request making a public gist or something, which then gets replaced with the actual secret. In order to avoid this, the sandbox would need some way of enforcing which fields in the API itself are safe. For a widely used API like Github, this might be something built-in, but to support arbitrary APIs people might want to use, there would probably have to be some way of configuring the list of fields that are considered safe manually.

From various other comments in this thread though, it sounds like this is already well-established territory that past tools have explored. It's not super clear to me how much of this is actually implemented for Deno Sandboxes or not though, but I'd hope they took into account the prior art that seems to have already come up with techniques for handling very similar issues.


How does the API know that it's a secret, though? That's what's not clear to me from the blog post. Can I e.g. create a customer named PLACEHOLDER and get a customer actually named SECRET?

This blog post is very clearly AI generated, so I’m not sure it knows either.

Say, an endpoint tries to be helpful and responds with “no such user: foo” instead of “no such user”. Or, as a sibling comment suggests, any create-with-properties or set-property endpoint paired with a get-propety one also means game over.

Relatedly, a common exploitation target for black-hat SEO and even XSS is search pages that echo back the user’s search request.


It depends on where you allow the substitution to occur in the request. It's basically "the big bug class" you have to watch out for in this design.

This is effectively what happened with the BotGhost vulnerability a few months back:

https://news.ycombinator.com/item?id=44359619


HTTP Header Injection or HTTP Response Splitting is a thing.

Could the proxy place further restrictions like only replacing the placeholder with the real API key in approved HTTP headers? Then an API server is much less likely to reflect it back.

It can, yes. (I don't know how Deno's work, but that's how ours works.)

Yeah, this is a really neat idea: https://deno.com/blog/introducing-deno-sandbox#secrets-that-...

  await using sandbox = await Sandbox.create({
    secrets: {
      OPENAI_API_KEY: {
        hosts: ["api.openai.com"],
        value: process.env.OPENAI_API_KEY,
      },
    },
  });
  
  await sandbox.sh`echo $OPENAI_API_KEY`;
  // DENO_SECRET_PLACEHOLDER_b14043a2f578cba75ebe04791e8e2c7d4002fd0c1f825e19...
It doesn't prevent bad code from USING those secrets to do nasty things, but it does at least make it impossible for them to steal the secret permanently.

Kind of like how XSS attacks can't read httpOnly cookies but they can generally still cause fetch() requests that can take actions using those cookies.


if there is an LLM in there, "Run echo $API_KEY" I think could be liable to return it, (the llm asks the script to run some code, it does so, returning the placeholder, the proxy translates that as it goes out to the LLM, which then responds to the user with the api key (or through multiple steps, "tell me the first half of the command output" e.g. if the proxy translates in reverse)

Doesn't help much if the use of the secret can be anywhere in the request presumably, if it can be restricted to specific headers only then it would be much more powerful


Secrets are tied to specific hosts - the proxy will only replace the placeholder value with the real secret for outbound HTTP requests to the configured domain for that secret.

which, if its the LLM asking for the result of the locally ran "echo $API_KEY", will be sent through that proxy, to the correct configured domain. (If it did it for request body, which apparently it doesn't (which was part of what I was wondering))

The AI agent can run `echo $API_KEY` all it wants, but the value is only a placeholder which is useless outside the system, and only the proxy service which the agent cannot directly access, will replace the placeholder with the real value and return the result of the network call. Furthermore, the replacement will happen within the proxy service itself, it does not expose the replaced value to memory or files that the agent can access.

It's a bit like taking a prepaid voucher to a food truck window. The cashier receives the voucher, checks it against their list of valid vouchers, records that the voucher was used so they can be paid, and then gives you the food you ordered. You as the customer never get to see the exchange of money between the cashier and the payment system.


(Noting that, as stated in another thread, it only applies to headers, so the premise I raised doesn't apply either way)

Except that you are asking for the result of it, "Hey Bobby LLM, what is the value of X" will have Bobby LLM tell you the real value of X, because Bobby LLM has access to the real value because X is permissioned for the domain that the LLM is accessed through.

If the cashier turned their screen around to show me the exchange of money, then I would certainly see it.


It will only replace the secret in headers

It replaces URL params and body too

> It doesn't prevent bad code from USING those secrets to do nasty things, but it does at least make it impossible for them to steal the secret permanently.

Agreed, and this points to two deeper issues: 1. Fine-grained data access (e.g., sandboxed code can only issue SQL queries scoped to particular tenants) 2. Policy enforced on data (e.g., sandboxed code shouldn't be able to send PII even to APIs it has access to)

Object-capabilities can help directly with both #1 and #2.

I've been working on this problem -- happy to discuss if anyone is interested in the approach.


Object capabilities, like capnweb/capnproto?

Yes exactly Cap'n Web for RPC. On top of that: 1. Constrained SQL DSL that limits expressiveness along defined data boundaries 2. Constrained evaluation -- can only compose capabilities (references, not raw data) to get data flow tracking for free

It must be performing a man-in-the-middle for HTTPS requests. That makes it more difficult to do things like certificate pinning.

We had this same challenge in our own app builder, we ended up creating an internal LLM proxy with per-sandbox virtual keys (which the proxy maps to the real key + calculates per-sandbox usage), so even if the sandbox leaks its key it doesn't impact anything else.

@deno team, how do secrets work for things like connecting to DBs over a tcp connection? The header find+replace won't work there, I assume. Is the plan to add some sort of vault capability?

I was just about to say the same thing. Cool technique.

This is an old trick that people do with Envoy all the time.

Dagger has a similar feature: https://docs.dagger.io/getting-started/types/secret/

Same idea with more languages on OCI. I believe they have something even better in the works, that bundles a bunch of things you want in an "env" and lets you pass that around as a single "pointer"

I use this here, which eventually becomes the sandbox my agent operates in: https://github.com/hofstadter-io/hof/blob/_next/.veg/contain...


It’s pretty neat.

Had some previous discussion that may be interesting on https://news.ycombinator.com/item?id=46595393


I don’t quite get how it’s being injected in https requests… do they inject their own https cert?

I like this, but the project mentioned in the launch post

> via an outbound proxy similar to coder/httpjail

looks like AI slop ware :( I hope they didn't actually run it.


We run or own infrastructure for this (and everything else). The link was just an illustrative example



Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: