Hacker News

That depends on whether you think of a contract as an interface or as an implementation.

A contract should present an interface that includes a declaration of its behavior. The declared behavior should be well defined, and if a bug in the implementation is discovered, the contract should be updatable to fix the bug. There could even be futures expressing the probability that a contract will be found to have a bug.

The willingness of participants to use a contract would depend on the chances that something about the interface is bug-prone, untested, etc.

As long as bugs do not result in reversal of money flow, the incentives seem to align properly toward a well-defined approach for declarative contracts.
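To make the interface/implementation split concrete, here is a minimal sketch (all names hypothetical, not any real contract platform's API): the interface declares behavior as checkable invariants, and the implementation can be swapped out to fix a bug without changing the declaration.

```python
class EscrowInterface:
    """Declared behavior: no balance ever goes negative, and total
    funds are conserved across any transfer. The implementation is
    replaceable; the declaration is not."""

    def __init__(self, implementation):
        self._impl = implementation
        self.balances = {"alice": 100, "bob": 0}

    def upgrade(self, new_implementation):
        # Fixing a bug means swapping the implementation; the declared
        # invariants stay fixed, so participants know what they agreed to.
        self._impl = new_implementation

    def transfer(self, src, dst, amount):
        before_total = sum(self.balances.values())
        self._impl(self.balances, src, dst, amount)
        # Enforce the declared behavior after every call.
        assert all(v >= 0 for v in self.balances.values()), "negative balance"
        assert sum(self.balances.values()) == before_total, "funds not conserved"

def buggy_impl(balances, src, dst, amount):
    balances[src] -= amount          # bug: no balance check
    balances[dst] += amount

def fixed_impl(balances, src, dst, amount):
    if balances[src] < amount:
        raise ValueError("insufficient funds")
    balances[src] -= amount
    balances[dst] += amount

contract = EscrowInterface(buggy_impl)
contract.transfer("alice", "bob", 60)
contract.upgrade(fixed_impl)  # bug discovered; interface unchanged
```

Note that the upgrade only changes how future transfers execute; past money flow is untouched, which is the incentive alignment described above.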



What you're trying to solve is something very similar to the recursive self improvement problem that MIRI[1] and friends are trying to solve in the sphere of "friendly" artificial intelligence. If something rewrites its own source code, how can you assert invariants that can be relied upon? So going and looking over there at what they've come up with may be fruitful.

[1] https://intelligence.org/
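As a toy illustration (emphatically not MIRI's formalism, which is proof-theoretic): an agent approves a successor only if the successor demonstrably preserves the invariant the agent itself obeys. This sketch substitutes a finite spot-check over probed states for an actual proof, which is exactly the gap the real research is about.

```python
def invariant(balance):
    # The property every generation must preserve: balance stays non-negative.
    return balance >= 0

class Agent:
    def __init__(self, step):
        self.step = step  # this agent's own transition: balance -> balance

    def approves(self, successor_step, test_states):
        # Approve a successor only if, on every probed state satisfying
        # the invariant, the successor's output also satisfies it.
        # (A real tiling agent would require a proof, not a sample.)
        return all(
            invariant(successor_step(s)) for s in test_states if invariant(s)
        )

parent = Agent(step=lambda b: b - 1 if b > 0 else 0)
safe_child = lambda b: b // 2       # halving can never go negative
unsafe_child = lambda b: b - 10     # can drive the balance below zero
```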


Very interesting! Looking at the papers and going to read them. Any specific papers you'd recommend?


"Tiling agents" would be the most arguably relevant. Provably correct agents that approve the construction of other probably correct agents obeying similar invariants.


Is "probably" there meant to be "provably"? I assume so, but I'm not sure.


Probably.


Thanks! Looks very interesting!


Sorry, not qualified to have an opinion. Opening this question to the floor?


Note that this probably means that designing contracts of this sort is "friendly AI complete" in some sense, and therefore not a good thing to be betting on in the short term.


> A contract should present an interface that includes a declaration of its behavior. The declared behavior should be well defined, and if a bug in the implementation is discovered, the contract should be updatable to fix the bug.

The first part, a "declaration of...[future] behaviour," is basically a normal contract.


Indeed. However, in a normal contract the implementation is specified fairly loosely, and the laws (and enforcement mechanisms) can change drastically over the life of the contract.

There ought to be a way to have highly vetted primitives. In meatspace legalese, boilerplate words and phrases are the closest we get to this... once a contract (or open source license, etc.) has been through litigation, its vulnerabilities become better known.

If a dispute gets decided the "wrong" way because a few clarifying words were absent, the contract is modified and future deals use the new contract.
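One way to picture "highly vetted primitives" in code (a hypothetical sketch, not an existing system): contracts may only be composed from clauses in an audited registry, mirroring how litigated boilerplate becomes trusted in meatspace legalese, and novel wording is refused outright.

```python
VETTED_CLAUSES = {}

def vetted(fn):
    """Register a clause as audited; only registered clauses may be used."""
    VETTED_CLAUSES[fn.__name__] = fn
    return fn

@vetted
def require_signatures(state):
    # Litigation-tested clause: enough parties must have signed.
    return len(state.get("signatures", [])) >= state.get("quorum", 2)

@vetted
def deadline_not_passed(state):
    # Litigation-tested clause: the contract's deadline has not elapsed.
    return state["now"] <= state["deadline"]

def build_contract(clause_names):
    clauses = []
    for name in clause_names:
        if name not in VETTED_CLAUSES:
            # Novel wording means unknown vulnerabilities; refuse it.
            raise ValueError(f"unvetted clause: {name}")
        clauses.append(VETTED_CLAUSES[name])
    return lambda state: all(clause(state) for clause in clauses)

escrow_ok = build_contract(["require_signatures", "deadline_not_passed"])
```

When a dispute reveals that a clause needs clarifying words, the fix lands in the registry once, and every future contract built from it inherits the patched wording.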



