I see a different problem here: Ethereum and the DAO were not in a mature enough state to handle this amount of money. For example, there is limited support for upgrading contracts in Ethereum, and the DAO was not reviewed thoroughly enough to be trusted with hundreds of millions of dollars.
Also, there are methods, such as formal modeling and verification, to make this kind of software far more secure.
I always wondered why there was such a rush to launch the DAO, as opposed to what Ethereum itself did: develop a proof of concept for over a year, then release a beta version and provide bounties for security bugs, all the while collaborating with testers and security researchers to stress the software.
>I always wondered why there was such a rush to launch the DAO.
So Slock.it could raise money, obviously. A common complaint right around when Slock.it launched the DAO was that Slock.it planned to offer the first funding proposal on it but only provided a brief 2-week period for review and debate before the voting started. It had the appearance of an attempt at railroading the crowdfunding process for a quick payoff.
Fortunately cooler heads uncovered the problems and spoke out, putting the brakes on. Now it's a big learning experience for the ~23,000 people who blindly jumped on the hypetrain and put money into a flawed investment vehicle. You'd think the first 7 years of Bitcoin would have taught people a lesson that this technology is risky, but dollar signs in the eyes tend to obscure hindsight I guess.
> ~23,000 people who blindly jumped on the hypetrain
I haven't touched ETH or the DAO personally, but it is unfair to label every investment in them irrational. Over the past 5 years cryptocurrencies have provided multiple opportunities to convert, say, $10k into $100k or even $1M within a year or two; once your net worth exceeds a certain amount, risking 1-2% of it on an opportunity like that is arguably wise - i.e. would be worth repeating - even if it does fail spectacularly.
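To make the "worth repeating" reasoning concrete, here is a quick expected-value sketch with made-up numbers (the win probability, payoff multiple, and stake fraction below are all hypothetical, not data from the DAO):

```python
# Hypothetical numbers illustrating why a small, repeatable bet on a
# long-shot 10x opportunity can be rational even though it usually fails.
p_win = 0.15        # assumed probability the speculative bet pays off
payoff = 10.0       # assumed multiple on a win (10x)
stake_frac = 0.01   # risk 1% of net worth per bet

# Expected return per dollar staked: win -> 10x, lose -> total loss.
ev_per_dollar = p_win * payoff + (1 - p_win) * 0.0
print(ev_per_dollar)   # 1.5: each staked dollar returns $1.50 on average

# Expected change in net worth per bet, as a fraction of net worth.
ev_growth = stake_frac * (ev_per_dollar - 1.0)
print(ev_growth)       # 0.005: about +0.5% of net worth per bet
```

Under these assumptions each individual bet loses 85% of the time, yet repeating it grows net worth on average; the argument breaks down, of course, if the true win probability is much lower than assumed.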
It is certainly true that as long as that reasoning delivers 10-100x+ returns to some of the people some of the time, a giant fount of speculative money will continue to moisten a lot of scammers, con artists, and insufficiently-careful software developers though.
Indeed an important question that will be decided soon is whether the "hacker" will get to keep the reward pot, or whether the rules of the game will change to deny it.
I am curious if a huge DAO-like organization could avoid being a target solely by keeping very little of its wealth liquid at any moment... i.e. what would have happened if 99.99% of the DAO's eth were loaned out... would the hacker have bothered? (assuming it wasn't an inside job)
It's pretty obvious and simple. Because they want to run away with your money NOW. Ethereum had money already so they could survive while waiting to scam you even more.
That depends on whether you think of a contract as an interface or as an implementation.
A contract should present an interface that includes a declaration of its behavior. The declared behavior should be well defined, and if a bug in the implementation is discovered, the contract should be updatable to fix the bug. There could even be futures expressing the probability that a contract will be found to have a bug.
The willingness of participants to use a contract would depend on the chances that something about the interface is bug-prone, untested, etc.
As long as bugs do not result in reversal of money flow, the incentives seem to align properly toward a well-defined approach for declarative contracts.
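The interface-with-declared-behavior idea above can be sketched in Python, assuming the declared behavior can be expressed as a checkable invariant (all names here are illustrative, not any real contract API):

```python
# Sketch: a contract interface that declares its behavior, with the
# invariant enforced independently of whichever implementation is used.
from abc import ABC, abstractmethod

class TokenTransfer(ABC):
    """Declared behavior: move `amount` from src to dst,
    conserving the total supply."""

    @abstractmethod
    def transfer(self, balances, src, dst, amount): ...

def checked_transfer(impl, balances, src, dst, amount):
    # Enforce the declared behavior regardless of implementation bugs:
    # a buggy (or malicious) implementation fails the check loudly
    # instead of silently creating or destroying funds.
    total_before = sum(balances.values())
    impl.transfer(balances, src, dst, amount)
    assert sum(balances.values()) == total_before, "supply not conserved"

class SimpleTransfer(TokenTransfer):
    def transfer(self, balances, src, dst, amount):
        if balances[src] < amount:
            raise ValueError("insufficient balance")
        balances[src] -= amount
        balances[dst] += amount

balances = {"a": 100, "b": 0}
checked_transfer(SimpleTransfer(), balances, "a", "b", 30)
print(balances)  # {'a': 70, 'b': 30}
```

The point of the split is that participants only need to vet the declared interface and its invariants; the implementation behind it can be fixed without changing what users agreed to.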
What you're trying to solve is something very similar to the recursive self improvement problem that MIRI[1] and friends are trying to solve in the sphere of "friendly" artificial intelligence. If something rewrites its own source code, how can you assert invariants that can be relied upon? So going and looking over there at what they've come up with may be fruitful.
"Tiling agents" would be the most arguably relevant. Provably correct agents that approve the construction of other probably correct agents obeying similar invariants.
Note that this probably means that designing contracts of this sort is "friendly AI complete" in some sense, and therefore not a good thing to be betting on in the short term.
> A contract should present an interface that includes a declaration of its behavior. The declared behavior should be well defined, and if a bug in the implementation is discovered, the contract should be updatable to fix the bug.
The first part, a "declaration of...[future] behaviour," is basically a normal contract.
Indeed. However, in a normal contract the implementation is specified fairly loosely, and the laws (and enforcement mechanisms) can change drastically over the life of the contract.
There ought to be a way to have highly vetted primitives. In meatspace legalese, boilerplate words and phrases are the closest we get to this... once a contract (or open source license, etc.) has been through litigation, its vulnerabilities become better known.
If a dispute gets decided the "wrong" way because a few clarifying words were absent, the contract is modified and future deals use the new contract.
Contracts can be upgraded if they're designed that way. You can have a wrapper contract that just calls out to other contracts, where the addresses of the other contracts are updateable. You can even make the callee run in the context of the caller, so all the data is held at the caller, which calls an external function that manipulates it.
Doing this is a tradeoff. On the one hand it lets you fix bugs and vulnerabilities, on the other your users have to trust you not to abuse your power.
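The wrapper pattern described above can be modeled in plain Python (a sketch only; the classes and names are hypothetical, and this stands in for Ethereum's delegatecall-style mechanics rather than reproducing them):

```python
# Sketch of an upgradeable contract: a proxy holds the state and an
# updatable pointer to an implementation. Users always talk to the
# proxy; the logic behind it can be swapped out by the owner.

class CounterV1:
    def increment(self, state):
        state["count"] += 1

class CounterV2:
    def increment(self, state):
        state["count"] += 2  # changed behavior after an "upgrade"

class Proxy:
    def __init__(self, owner, implementation):
        self.owner = owner
        self.impl = implementation
        self.state = {"count": 0}  # data lives at the caller/proxy

    def upgrade(self, caller, new_impl):
        # Only the owner may swap implementations. This is precisely
        # the power users must trust the owner not to abuse.
        if caller != self.owner:
            raise PermissionError("only owner can upgrade")
        self.impl = new_impl

    def increment(self):
        # Delegate: the implementation runs against the proxy's own
        # state, analogous to running a callee in the caller's context.
        self.impl.increment(self.state)

proxy = Proxy(owner="alice", implementation=CounterV1())
proxy.increment()
print(proxy.state["count"])  # 1
proxy.upgrade("alice", CounterV2())
proxy.increment()
print(proxy.state["count"])  # 3
```

The tradeoff from the comment above is visible in `upgrade`: the same hook that fixes bugs also lets the owner change the rules under everyone's feet.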
It completely defeats the point, because a contract no longer means what it says, it means what Bob says. So just wire your money to Bob, it's simpler.
Until such time as a College of Hortators gets invented and people start including standard provisions to defer some decisions to them. Yay, the courts have been reintroduced.
Vlad Zamfir, Greg Meredith, and Emin Gun Sirer are working on this for Ethereum's next consensus algo, Casper. Not aware that they've published anything on it yet though.