Friday, July 7, 2017

IANAL, but Levetrage is not blackmail

I am not a lawyer, but I can read what people write. See the following article from WaPo. https://www.washingtonpost.com/news/volokh-conspiracy/wp/2017/07/06/it-is-actually-difficult-to-define-blackmail/?utm_term=.3c07ab692063 TL;DR: In order to be blackmail, there has to be a "lack of nexus." If I am in a dispute with you and I threaten to disclose the details of the dispute, there is "nexus" between my threat and what I may be asking for.

Wednesday, June 14, 2017

Longer White Paper

Mixed Strategy Contracts

LEVETRAGE: MIXED STRATEGIES ON THE BLOCKCHAIN

Abstract. We explore innovations available using blockchain technologies which allow two or more parties with conflicting interests to come to cooperative agreements. We offer a brief background on some of the relevant game theory, including the ultimatum game, Nash bargaining solutions, and Rubinstein’s strategic approach. We suggest that with a well-designed smart contract platform, real world solutions that are quite close to their model counterparts in game theory can be attained. We highlight two particular elements that are available on a blockchain: 1) Threat/commitment enforcement, and 2) genuinely mixed strategies. We argue that a smart contract platform built on top of Ethereum is not only feasible but will have significant real world application, which will be of both social and economic value. Finally, we outline the specifications of such a platform.

1. Disclaimer

This paper contains statements that may appear to be assertions involving contract law, or other areas of “the law”. Nothing that follows has been vetted by anyone with legal qualifications, and we make no such claims.

2. Introduction

The bargaining problem is often given as follows  [Rub82, pg 97]:

Two individuals have before them several possible contractual agreements. Both have interests in reaching agreement but their interests are not entirely identical. What “will be” the agreed contract, assuming that both parties behave rationally?

This age-old question has long frustrated game theorists. Without ways to enforce binding threats, negotiation can be unpredictable, and often the outcome is determined by difficult-to-quantify “negotiation skills”. Solutions to hard problems are by nature evasive, so academics usually restrict themselves to tractable models. In 1953, Nash [Nas53] studied a model demand game and gave axiomatic characterizations of the so-called Nash Bargaining Solution. Rubinstein’s seminal paper [Rub82] in 1982 indicated a possible strategic approach towards inducing a solution. Rubinstein’s game (developed further in [BRW86]) introduced two factors forcing the players to resolve the game: 1) a penalty for time spent negotiating, and 2) an exogenous probability of negotiation breakdown.

Game theoretical situations, in contrast to real world negotiations, usually allow players to choose from a finite-parameter space of moves. In the real world, there are an unlimited number of moves, and there are an unlimited number of games players can choose to play. In fact, if resolving a situation by playing a game is preferable to not doing so, the first leveraged negotiation should be over which game, out of a set of many possible games, is to be played. If players are bound to the rules of a game, tidy solutions may be found. But no player would agree to play a game whose rules put him at a disadvantage.

Using blockchain technologies, we would like to offer a player tools from game theory. In particular, the player should be able to make the strongest move possible: A move that commits the player to some action in a way that gives limited rational options for the adversary, while minimizing the probability of a disastrous outcome. The best way to accomplish this objective is to use mixed strategies and smart contracts. We hope this will be used to induce solutions that are economically and socially optimal.

Often, in the more serious situations, negotiation is handled by expensive lawyers, or is otherwise costly or inefficient. Wars of attrition are inefficient and often lead to an unfair bargain. By setting up a platform, we would like to allow parts of the negotiation process to move more efficiently, with ready-to-go “moves” that allow a traditionally disadvantaged party to use leverage against larger adversaries, whenever the economics of the situation dictate.

We refer the reader to T.C. Schelling’s classic text,  [Sch60] for a fascinating discussion of how game theory meets the real world. A more modern reference on the theoretical side is given in the monograph  [Mut99], which includes a thorough discussion of Rubinstein type games.

3. Levetrage

When two parties are negotiating over a surplus, there is a well-defined idea in game theory literature of what the agreement “should” be. This is called the Nash Bargaining Solution (NBS). Assuming that the NBS is the result of an efficient market, any agreement reached other than the NBS should be considered a market inefficiency. A market inefficiency is an arbitrage opportunity. In this case, the intermediate currency used is leverage.

So how does one use leverage to arbitrage in this situation? In many situations, achieving a fair market outcome requires changing the nature of the negotiation game from an Ultimatum game (see Example 4, below) to a probabilistic Rubinstein type game (see Example 6, below). With access to smart contracts that execute mixed strategies, this becomes possible through our platform.

To be clear, we don’t intend to buy and sell leverage directly; rather, our goal is to give parties the tools to reach an efficient agreement quickly.

3.1. Levetrage as a Platform.

The Levetrage Platform will consist of a set of tools that can be readily accessed by an agent wanting to improve their bargaining position and expedite the bargaining process. To begin, we provide options for making binding threats and commitments. This will require an interface between the blockchain and the real world, but this interface will be much cleaner than traditional (non-smart) contracts. Next, we provide a way to implement verifiable mixed strategies: namely, making binding probabilistic threats. This feature will allow one party to minimize his exposure to an acceptable level while increasing the exposure of the adversary to a sufficiently unpleasant level. Along these lines, we offer tools to enact irrevocable maneuvers that result in negative consequences for both parties, in a way that must be taken seriously. We suggest some specific maneuvers.

3.2. Levetrage as a Service.

Necessarily, the interface will be more complicated than an app. A typical user lacking blockchain savvy should be able to hire a professional at a fairly inexpensive rate (compared with, say, a lawyer), who would quickly interface with our platform to devise a working strategy that maximizes the end user’s interests. Examples of irrevocable actions include efforts by law firms, public relations firms, or advertising firms. In the final picture, the Levetrage ecosystem will be a reputation-based network of oracles, lawyers, actuaries, blockchain engineers, public relations firms, and IoT nodes such as billboards.

3.3. Nash tokens.

The platform will be monetized by use of Nash tokens. Nash token holders will receive the proceeds when certain contracts fail. Nash tokens will be burnt in order to perform “moves”. There will be a fixed number of tokens; however, the cost (in tokens) for certain moves will be controlled externally, to compensate for possible deflation. ETH will remain the underlying currency.

4. Motivating Real World Problems

Example 1. Mike creates $1,000,000 in value for corporation X every year, but is paid only $100,000. Because Mike is constrained by a unique skillset and geographical location, X never offers him more compensation.

Example 2. Jane pays $13,000 for a product from manufacturer X, and the product arrives defective. Jane asks for a refund; X says “Sorry.” Jane does some calculations and realizes that the total costs of a lawsuit would probably exceed the benefits of recovering the losses, and decides not to pursue such an action.

Example 3. Joe is sued for defamation by his former employer, corporation X, after describing some events that Joe witnessed while employed at X. Joe’s descriptions are accurate, and would eventually be corroborated by convincing evidence. However, Joe’s lawyer warns Joe that the case could take two years and cost $150,000 to resolve. Instead, X offers to settle: Joe pays X $10,000 and signs a non-disclosure agreement. Joe accepts this offer, rather than put forth the energy and money to fight.

What’s common in the above three examples is that one party is leveraging the other party’s rational self-interest. Exploiting the fact that each player has a relatively limited set of moves, X has manipulated the game to a point where the other player’s best strategy is determined. In game theory, this style of reasoning goes by the name of subgame perfection.

The problem we would like to address can be summarized as “inefficiencies in adversarial bargaining”. We assert that if one agent is allowed to cheat, there is inefficiency. Such situations occur frequently in the fabric of our legal and economic systems. If a party with deep pockets squares off in a legal dispute with a smaller party, the common assumption is that the advantage lies with the larger party: The smaller party is often better advised to avoid a lengthy and expensive legal battle, even if it is a battle that they would most likely win. When a pyrrhic victory is compared with a slightly unfair settlement, a rational individual may choose the unfair settlement. The ability of a party to arrive at unfair agreement terms distorts the free market and undercuts its effectiveness. An economically optimal solution should allow both parties equal opportunity to leverage the potential costs to the adversary.

5. Motivating Game Theory Games

Example 4 (The Ultimatum Game). Player 1 is handed $100,000. He offers to split it (x, 100,000 - x) with Player 2. Player 2 either accepts this split and walks away with 100,000 - x, or rejects it and both players walk away with none.

If Player 1 offers a split of ($80,000, $20,000), a rational Player 2 will probably accept the offer and walk away with $20,000, rather than reject it to spite Player 1. In reality, experiments have shown that sometimes Player 2 will spite Player 1 by rejecting a low offer [RPOFZ91].

For comparison, we point out a less rigid version of the game.

Example 5 (The Negotiation Game). Player 1 and Player 2 must come to an agreement on how to split $100,000, or each gets none. Either player can walk away from the game at any time. This is the only rule.

Example 6 (Rubinstein Type Game, Probabilistic Version). Player 1 offers a split, and a probability p. With probability p the game proceeds, and Player 2 may accept the split or reject it. If Player 2 rejects it, then Player 2 can propose a counter offer, and a probability. At each step the game has some probability of terminating with each player receiving none.

6. Dividing a surplus and the Nash Bargaining Solution

What should the fair arrangement be? Egalitarian principles dictate that the surplus should be shared, and a belief in free market competition suggests this too: unless one party has demonstrated it is better suited to control a resource, sharing a divisible resource promotes competition. The Nash bargaining solution  [Nas53] agrees: the surplus should be split down the middle when the players are on equal footing.

6.1. Axiomatic approach.

Nash  [Nas53] studied the problem of bargaining using an axiomatic approach, and characterized equilibria. Given a set of axioms (which are probably not realistic) Nash determined that the best solution was to find the value x that maximizes the function

(u1(x) - d1)(u2(x) - d2)     (1)

where u1 and u2 are the utilities enjoyed from a particular agreement by Players 1 and 2, respectively, over a set of possible agreements x ∈ A, while d1 and d2 are the utilities gained if the negotiation fails. For example, in the Ultimatum Game, Example 4, the disagreement point d = (d1, d2) is (0,0): That is, if they disagree, each player gets nothing. Nash’s axioms included symmetry: Except for the very important question of who offers first, Example 4 is symmetric between the two players.

Nash’s axioms also included translation invariance. The solution should not depend on where the disagreement point lies in the plane, but on where it lies with respect to the other possible agreement points. For example, if both players are charged $50,000 to play the game (making it zero-sum), the negotiations should be the same: Each player should be as averse to losing $50,000 as they are keen to gain the same.

Note however, that if Player 2 could raise his own disagreement value d2, this would alter the location where the maximum occurs in (1). We propose a way to do this in section 7.

In Example 1, the idea of the fair market value of Mike’s effort is not so clear, because the market is not stepping in to narrow the surplus. If a competitor to X were to move down the street from X, Mike, with his unique skill-set, could spark a bidding war and find his effort is suddenly worth several times more. Even without a competitor, we can view Mike’s situation as either an Ultimatum Game (Example 4), in the case that X gives Mike a “take-it-or-leave-it” offer, or as a Negotiation Game (Example 5), in the case that Mike feels empowered to negotiate. We assume that Mike, with his work ethic and skill-set, could earn $100,000 at another position in another industry if he left X. So Mike’s disagreement value is $100,000, while X’s disagreement value would be the maximum of 0 or $1,000,000 minus the cost of finding and training a suitable replacement for Mike, minus any other losses incurred in the process. If we assume X’s disagreement value is 0, then maximizing (1) one concludes that Mike could be making $550,000.
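As a sanity check, the Nash product (1) can be maximized numerically for Mike’s case. This is an illustrative sketch assuming linear dollar utilities; the function and the grid step are our own devices, not part of the platform:

```python
# Numerically maximize the Nash product (u1(x) - d1)(u2(x) - d2), where
# the salary x splits the $1,000,000 of yearly value Mike creates.
# Utilities are taken to be linear in dollars -- an assumption.

def nash_product(x, d1=100_000, d2=0, total=1_000_000):
    u1 = x            # Mike's utility: his salary
    u2 = total - x    # X's utility: value created minus salary
    return (u1 - d1) * (u2 - d2)

# Grid search over feasible salaries ($1,000 steps, for illustration).
candidates = range(100_000, 1_000_001, 1_000)
best = max(candidates, key=nash_product)
print(best)  # 550000, agreeing with the figure derived in the text
```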

In reality, one would not expect Mike to negotiate such a pay raise. In many industries, employees who want a raise are often best advised to walk into their manager’s office with an outside offer. This leads to the inefficient practice of raise-deserving employees applying for jobs with companies at which they are not interested in working.

In comparison, the disputes in the other two examples appear to be not over a surplus, but over negative consequences resulting from litigation. However, if we accept translation invariance of the problem, this is a matter of perspective: In Example 2, one can set the disagreement point to be the costs to both parties after a long, expensive, and possibly public lawsuit. To explore this situation further, let’s assume that both parties understand that a legal action will most likely end with Jane being refunded. Both parties also know that it will be easy for lawyers to obfuscate long enough to make the case significantly painful to prosecute. In this particular jurisdiction it is likely that the court may only award Jane a portion of her legal fees. So, for the sake of making up numbers, Jane can expect to gain the $13,000 difference, but possibly incur $25,000 of legal fees, of which only $10,000 may be reimbursed. On the other hand, X will expect to pay perhaps $30,000 for legal fees, a $13,000 reimbursement, and $10,000 in Jane’s legal fees, and in addition may lose $50,000 in profits as sales drop due to bad publicity. So Jane stands to lose $2,000, while X stands to lose roughly $100,000. The disagreement point is then (-$2,000, -$100,000), and the set of possible agreements A should be the set of partial or complete refunds given to Jane instead of going to trial. Simple calculus determines that the maximum of (1) occurs at the far endpoint: Jane should get her full refund. Intuitively, this is obvious: the incentives for coming to a settlement are clearly much more compelling for X than for Jane. If Jane threatens to sue, and she can be taken credibly, X will immediately make the payment. The problem is that they have no reason to take such a threat credibly. They are certain Jane will not sue; for her, the decision is a net negative.
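The corner solution claimed above can be verified numerically under the same made-up figures, again assuming linear dollar utilities:

```python
# Nash product for Jane's dispute: the agreement is a refund r in [0, 13000],
# with disagreement point roughly (-2000, -100000) as estimated in the text.

def nash_product(r, d1=-2_000, d2=-100_000):
    u1 = r        # Jane's utility from a refund of r (no trial)
    u2 = -r       # X's utility: it pays out r
    return (u1 - d1) * (u2 - d2)

refunds = range(0, 13_001)
best = max(refunds, key=nash_product)
print(best)  # 13000: the maximum sits at the endpoint, i.e. a full refund
```

The unconstrained maximum of (r + 2,000)(100,000 - r) would sit at r = 49,000, far beyond the $13,000 cap, which is why the constrained optimum lands on the endpoint.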

6.2. Strategic Approach.

In 1982, by introducing discounting, Rubinstein devised a game that would force the players to come to a solution early in the negotiation stages. There are two ways to do the discounting: Assume that time is costly to both parties, or assume that there is some probability at each time that the negotiations could fail. In either case, Player 1 should be able to make an optimal offer that will immediately be accepted by Player 2. In some versions of the game, the players do not control the probability p that the game will continue. If p << 1, Player 1 will have a higher expectation than Player 2; however, this expectation will be less than 1/2. If Player 1 can choose the probability p, then Player 1 should choose it as close to 1 as possible, and then offer 1/2. It follows that as p → 1, the expected outcome is again the Nash Bargaining Solution.
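For a numeric illustration, consider the standard alternating-offers equilibrium shares with continuation probability p: proposer 1/(1+p) and responder p/(1+p). These formulas are the textbook result for the stationary equilibrium, not something derived in this paper, but they show the convergence to the even split as p approaches 1:

```python
# Equilibrium shares in the standard alternating-offers game with
# continuation probability p (textbook formulas, stated as an assumption):
#   proposer share  = 1 / (1 + p)
#   responder share = p / (1 + p)

def shares(p):
    return 1 / (1 + p), p / (1 + p)

for p in (0.1, 0.5, 0.9, 0.99):
    s1, s2 = shares(p)
    print(f"p={p}: proposer={s1:.3f}, responder={s2:.3f}")
# As p -> 1 both shares approach 1/2, the Nash Bargaining Solution split.
```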

For a satisfying probabilistic explanation, see [Mut99, Definition 2.2], where one can see that if both players have the ability to choose the probability of breakdown, the Nash solution will be expected.

Principle 1. If players are allowed to determine the probability that the negotiation will break down, then the players should come to a fair agreement very quickly.

Until recently, it was thought that the above is not a realistic expectation in real world applications. For example, we quote from a 2011 discussion of Rubinstein’s game [AB11]:

The presence of Nature’s choice in that procedure is very appealing. It points out the possibility that a relationship may end after a failure to reach an agreement with some probability. But the fact that this probability is determined by one of the players may not be deemed very realistic.

We attempt to provide partial answers to this in the sequel.

7. Creating Commitment

Before delving into probabilistic strategies, we explore the basic maneuver of moving the disagreement point by creating commitment. We encourage the reader to see the classic textbook by Schelling for a more verbose and interesting discussion of the various topics discussed here. Paraphrasing Schelling’s definition of commitment [Sch60, pg 127], we say that commitment involves maneuvers that leave one in a position where their only rational option is to follow through with a threat. The object of commitment is the credibility of a threat. We will allow Nash to introduce the issues involving threats [Nas53, pg 130]:

If one considers the process of making a threat, one sees that its elements are as follows: A threatens B by convincing B that if B does not act in compliance with A’s demands, then A will follow a certain policy T. Suppose A and B to be rational beings, it is essential for the success of the threat that A be compelled to carry out his threat T if B fails to comply. Otherwise it will have little meaning. For, in general, to execute the threat will not be something A would want to do, just of itself. The point of this discussion is that we must assume there is an adequate mechanism for forcing players to stick to their threats and demands once made: and one to enforce the bargain, once agreed. Thus we need a sort of umpire, who will enforce contracts of commitments.

What Jane needs in Example 2 is the ability to make binding threats. We propose one here. Jane takes $10,000 in ETH, deposits it to an address, and creates the following contract. In 30 days, the contract will read a channel that by default is broadcasting ‘Unfulfilled’. If after 30 days the channel is broadcasting ‘Fulfilled’, Jane gets her money back. If not, the ETH will be distributed to holders of Nash tokens, lost to Jane forever. The channel is controlled by a well-reputed oracle whom Jane has commissioned to toggle the channel only on the condition that Jane presents the oracle with either a $13,000 check from X, or a court copy of a civil action filed against X.
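The logic of Jane’s contract can be sketched as follows. This is plain Python standing in for actual contract code; the class, its method names, and the settlement outcomes are illustrative simplifications, not an Ethereum design:

```python
# A minimal sketch of the commitment contract's logic. The oracle,
# channel, and payout mechanics are simplified placeholders.

class CommitmentContract:
    def __init__(self, deposit, deadline_days=30):
        self.deposit = deposit
        self.deadline_days = deadline_days
        self.channel = "Unfulfilled"   # default broadcast

    def oracle_toggle(self):
        # Only the commissioned oracle flips this, and only when shown
        # either X's check or a filed civil action.
        self.channel = "Fulfilled"

    def settle(self, day):
        if day < self.deadline_days:
            return "pending"
        if self.channel == "Fulfilled":
            return ("refund", self.deposit)      # Jane gets her money back
        return ("distribute", self.deposit)      # forfeited to token holders

c = CommitmentContract(deposit=10_000)
print(c.settle(day=30))   # ('distribute', 10000): condition never met
c.oracle_toggle()
print(c.settle(day=30))   # ('refund', 10000)
```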

Now, the tables have turned. Jane contacts X, and tells them that she is not merely threatening a lawsuit, but that it is now against her interest to fail to file the lawsuit. She explains to X how it works, and hands them a copy of the arrangement she has made with the oracle. At this point, X is wise to assume that Jane is not bluffing. If X is rational X will pay her immediately.

The game is not completely set, however, as there are other moves that X may be able to make. For example, if they are also blockchain savvy, they may be able to create imaginative responses. Such a response may bind them not to pay unless $5,000 is deposited into an address. If they are able to communicate this to Jane, they have created a way out for Jane, which, rationally, she should accept. She could respond with a counter-commitment, and this could go on. The best way for Jane to preclude counter offers is to go off the grid, and leave the oracle to collect the check.

7.1. Comparison with ’classical’ contracts.

Implementing a legal contract accomplishing the above would be more problematic. Technically, the code that receives a broadcast signal and either refunds or distributes Jane’s money is not a legal contract. It’s a piece of code. (If there is a “meeting of the minds”, it is between Nash token holders and Jane, but we do not argue that here.) So it doesn’t require all parties to give consideration. Nor is there a question of jurisdiction, or enforceability by a court. It simply happens. The arrangement between Jane and the oracle is quite simple. If Jane were to try to devise the same contract in real life, she would deposit the money in escrow, but in theory she would be able to sue to get the money back, arguing in court that the original arrangement did not constitute an enforceable agreement. The escrow transaction is reversible, and a court may order this. Her adversary, if well advised, would probably be aware of this.

7.2. Possible Conflicts of Interest.

By default, the oracle does nothing. The oracle has a reputation to uphold, so it is happy to simply report the correct outcome. However, in theory, the oracle could offer to toggle the channel for a mere $500. This possibility is unfortunately hard to eradicate completely. We hope that the cost to the oracle of losing its reputation is significant enough to avoid this possibility. It is worth pointing out that, in order for this to work, what matters is not the scruples of the oracle, but rather that X believe the oracle is scrupulous. X must consider it most likely that the oracle is scrupulous, and cannot count on Jane and the oracle colluding to change this.

Nonetheless, we would prefer an arrangement that doesn’t offer any room for dishonesty.

7.3. Commitment Can Backfire.

In the example in this section, Jane has no way out. This is by design. However, if her adversary is stubborn, vindictive, not thinking clearly or simply unable to wrap their mind around the concept of a smart contract, Jane may have made a bad decision. Jane does not know ahead of time the probability that this will happen, however, she may be able to estimate it. If the probability that X is going to proceed irrationally is greater than a certain threshold, Jane is wise not to proceed. In the following section we offer a way to minimize the risk when the adversary has a possibility of being irrational, while still taking advantage of available leverage.

8. Mixed Strategies

In game theory, a mixed strategy is a strategy that has a random component. For example, a soccer player making penalty kicks may be a better shot kicking to her left. However, if she always kicks to her left, the goalkeeper will anticipate this. The shooter should devise a plan to kick, say, 60% of shots to her left: This will not allow the goalie to anticipate the shot, but still allows her to use her better shot more often than not. The idea goes back to von Neumann in 1928 [Neu28] and his minimax theorem for two-player zero-sum games; Nash later proved that all N-player games enjoy equilibria, provided that players are allowed mixed strategies. Intuitively, by asserting a small probability of a move that will be damaging to the opponent, one can change the opponent’s strategy entirely, even without committing to the move. By using all probabilities between 0 and 1, one can solve an optimization problem: maximizing the opponent’s exposure while minimizing one’s own.
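The penalty-kick story can be made concrete. In this sketch the scoring probabilities are invented for illustration; the kicker’s equilibrium mix is the one that leaves the goalkeeper indifferent between diving left and right:

```python
# A 2x2 zero-sum penalty-kick game. Entries are the kicker's scoring
# probabilities (made-up numbers):
#
#                goalie dives L   goalie dives R
# kick left          0.50             0.90
# kick right         0.80             0.40

a11, a12 = 0.50, 0.90   # kick left
a21, a22 = 0.80, 0.40   # kick right

# The kicker kicks left with probability q chosen so both goalie dives
# yield the same scoring probability (the standard indifference condition).
q = (a22 - a21) / (a11 - a12 - a21 + a22)
value = q * a11 + (1 - q) * a21   # scoring probability at equilibrium
print(q, value)  # 0.5 0.65
```

With these particular numbers the equilibrium mix happens to be 50/50; less symmetric payoffs would shift it toward the stronger side, as in the 60% example above.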

We begin with a slight mix of game theory and its interactions with the real world. Suppose that Players 1 and 2 are playing the Negotiation Game (Example 5), but Player 1 is impatient. Player 1 decides to make an offer to split the pot down the middle, and then adds that if the offer is not accepted, Player 1 will terminate the game and walk away. This threat becomes useful only if Player 2 receives the threat, believes the threat, and is willing to act rationally upon it. However, Player 1 is not 100% confident that Player 2 will respond: The threat may not have been received credibly, or was not received at all. So there is risk to Player 1 that the strategy will backfire. Player 1 wants to minimize the risk of walking away with nothing.

8.1. Jane’s example, again.

Jane would like a way to create a commitment that will hold with some probability. She devises the following. Again, she takes $10,000 and deposits it to an address. She sets a broadcast to “tick-tock.” Each day the contract checks the broadcast. If the channel is broadcasting “tick-tock”, the contract chooses a random number between 1 and 100. If that number is larger than 95, the contract takes the money and deposits it as a non-refundable retainer in an account held by law firm Y. Jane has signed a retainer agreement with law firm Y. Their agreement is that, as soon as they receive a payment from Jane, they immediately begin filing a civil action against X and will deliver 40 hours of billed work for Jane on this case. Jane presents this arrangement to X. Assuming that commencing the lawsuit will result in losses to both parties as described above, each day the expected cost to Jane should be 0.05 × (-$2,000) = -$100. This is not a dear price to pay to show X that she is serious. Meanwhile, the expected cost to X from this arrangement will be 0.05 × (-$100,000) = -$5,000. This is much more significant, and should be taken seriously. Jane can leave this event to chance for several days or weeks, until she determines that there is no further value in doing so. Of course, in the process she may end up filing a lawsuit. But she can bound the probability of this event from above.
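Jane’s exposure can be quantified in a few lines, using the dollar figures from the text (the function names are ours):

```python
# Expected daily costs and the cumulative trigger probability for Jane's
# 5%-per-day mixed commitment.

p = 0.05                         # daily probability the retainer is paid
jane_loss, x_loss = -2_000, -100_000

daily_expected_jane = p * jane_loss   # -100 per day of exposure
daily_expected_x = p * x_loss         # -5000 per day of exposure

def prob_triggered_by(day):
    """Probability the lawsuit has been triggered within `day` days."""
    return 1 - (1 - p) ** day

print(daily_expected_jane, daily_expected_x)  # -100.0 -5000.0
print(round(prob_triggered_by(14), 3))        # 0.512 after two weeks
```

The cumulative probability is what Jane bounds from above: she can stop the clock once the arrangement has either worked or outlived its usefulness.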

In comparison, consider the following hypothetical discussed by Schelling  [Sch60, pg 196]: A hitchhiker pulls a gun on a driver. The driver responds that he will crash the vehicle, killing both, unless the hitchhiker tosses the gun out the window. The hitchhiker finds this threat irrational and continues to make demands on the driver. So the driver begins to accelerate. At 120 miles per hour, the hitchhiker must realize that the probability of a fatal accident is now very much a positive number.

The hitchhiker situation is similar in some aspects, but economically much different. Indeed, both the hitchhiker and the driver stand to lose their lives. The driver (who may have a respectable job, and a family) likely has more to lose than a hitchhiker who is robbing kind strangers. If the hitchhiker realizes this, the hitchhiker is smart to keep the weapon cocked. The driver is making an irrational decision to drive recklessly and will eventually slow the car. On the other hand, Jane has much less to lose. The company X, which is (in theory) motivated by money, will not be interested in playing this game for long. This is the major element that she is leveraging, and why her strategy should work if X is rational.

However, Schelling’s point is summarized as “the only way to becoming physically committed to an action is to initiate it”. When Jane posts and allows a contract to be executed, she is doing precisely that.

Schelling describes mixed strategies as a tool to “scale down” the threat [Sch60, pg 178-179]. This accounts for the possibility that a threat may be ignored: The threat was not communicated, the threat was miscommunicated, the adversary may have a nature which ignores the threat (i.e., the scorpion who stings the frog, ushering in his own watery death), the adversary is playing a “long game” and is trying to create an illusion of irrationality, or other unforeseen reasons. In fact, if one has a reasonable guess at the probability that a threat will fail for unknown reasons, and also has a guess at how high the probability needs to be to form an effective threat, one can compute a range of probabilities that will be useful. Using the calculus in [Sch60, pg 180], it may be that the only threats worth making involve probabilities in a small interval.
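This calculus can be sketched numerically. The tolerance and threshold figures below are hypothetical; the point is simply that the useful threat probabilities form an interval:

```python
# Schelling-style "scaled down" threat calculus with made-up numbers.
# A threat probability q is only worth using if the expected cost to the
# threatener stays tolerable while the expected cost to the adversary is
# still frightening enough.

own_cost, adversary_cost = 2_000, 100_000   # costs if the threat fires
own_tolerance = 500          # most the threatener will risk in expectation
adversary_threshold = 4_000  # expected cost needed to move the adversary

q_min = adversary_threshold / adversary_cost   # below this, toothless
q_max = own_tolerance / own_cost               # above this, too risky
print(q_min, q_max)  # 0.04 0.25 -- only q in this interval is useful
```

If the two bounds cross (q_min > q_max), no probability works and the threat is not worth making at all, which is Schelling’s "small interval" observation in its starkest form.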

9. The Hornet’s Nest Piñata

Often in litigation, the party with the smaller bankroll will be advised not to “kick the hornet’s nest.” Lawyers tell clients that the adversary is exceedingly mean and will “scorch the earth” when angered. For this reason, litigants are often told not to speak to the press or anyone else about a particular case, in order to maintain the possibility of a merely mildly uncomfortable settlement. This essentially allows the manipulative party to keep their thumb on the smaller party while grinding out a campaign of attrition.

Companies and individuals are very reluctant to engage in a public opinion battle. Lawyers are typically not social media savvy, and would like to keep the fight in their own domain. Thus the adversary can often rely on lawyers to counsel their clients to keep the dispute within the legal system, where the lawyers have control. On the other hand, a battle fought in the press or over social media is unpredictable and can be devastating. For this reason, going public can be the ultimate act of kicking the hornet’s nest. This is a clear threat that an individual facing down a bully can make. However, going to the press is irrevocable, so the bully can often rest assured: The weaker party knows the consequences of crossing the line and will never do it.

This is where the option of a mixed strategy is most attractive: A litigant can commit to crossing this line, without actually crossing it.

Consider Example 3. Frustrated at being silenced, Joe is tempted to tell his story even louder, but his attorney tells Joe: Pay up and move on; don’t kick the hornet’s nest. We would like to offer Joe an option. Joe collects the documents, a list of corroborating witnesses, his own juicy account of the story, and a copy of the SLAPP complaint. Joe places this all in an envelope and contacts a public relations firm A. Joe signs an agreement with A: “If I deposit $5,000 in your account, you are to spend $5,000 publicizing this story. If I do not deposit $5,000 within two months, destroy the envelope.” Then Joe sets up a smart contract similar to Jane’s, with a probability of 5% per day of triggering the event. If the wrong number comes up, the hornet’s nest is irrevocably blown apart. If the lawyers are to be believed, both sides are in for a long and ugly battle, but one that Joe will eventually win, causing heavy losses to X.

We feel that this is the key use case of Levetrage. Going to the press, or even further, hiring a PR firm, is a very damaging escalation, and as such can be one of the most powerful threats. Notice that this can be applied in either of the other two examples: Paying a PR firm to blast the company will cost Jane, but will hurt X much worse, even if neither party involves a court. So instead of a lawsuit, Jane could simply threaten to launch a $2500 publicity campaign against the company with probability p. In Mike’s case, publicly badmouthing his employer would certainly cause them to part ways and cause them both great damage. (Of course, Mike might want to consider how this would appear to any future employer.)

In practice, there are a number of ways to spread damaging information. Renting a billboard is now easy. SEO firms can blanket the internet with social media posts. The Levetrage network itself can also be used to spread a story quickly. By raising the specter of a Levetrage-centered negative publicity campaign, the prominence of the network increases and the network becomes more valuable. So coin holders all over the world have an incentive to spread the news and assist whenever a negative publicity campaign is launched.

10. Platform Summary

To begin, we offer a bare bones specification. The three fundamental components are as follows.

10.1. On Chain Contracts.

Each contract will consist of

  • A list of addresses that can terminate the contract
  • An action to perform upon termination
  • A time duration during which the contract is valid
  • An action taken when the contract times out
  • An irrevocable escalation event.

We comment on these in turn:

  • The address list can consist of just the user who created the contract, or an oracle, if the end user wants to take the matter out of her hands in order to create commitment.
  • One example of an action upon termination is to return a specified amount of funds to a user who deposited them.
  • Contracts shouldn’t run forever.
  • The expiration can be either a neutral action or an irrevocable action, or could refer to a random event.
  • The irrevocable escalation will typically be depositing ETH into a third account. It is assumed that the end user has already set up a legal contract (see below) such that this deposit triggers a real world action.
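As a rough sketch of the data and state transitions involved, the contract components above might be modeled as follows. This is illustrative Python, not the platform’s actual contract code; all names and the simplified state machine are our own.

```python
from dataclasses import dataclass

@dataclass
class MixedStrategyContract:
    """Toy model of the on-chain contract described above (illustrative only)."""
    terminators: set          # addresses allowed to terminate the contract early
    expires_at: float         # timestamp at which the contract times out
    escalation_deposit: int   # amount sent to the service provider on escalation
    state: str = "active"

    def terminate(self, caller: str) -> None:
        # Only a listed address (the creator, or an oracle) may terminate;
        # termination might, e.g., refund deposited funds to the user.
        if self.state == "active" and caller in self.terminators:
            self.state = "terminated"

    def escalate(self) -> int:
        # The irrevocable escalation: release the deposit to a third account,
        # which (per a pre-signed legal contract) triggers a real world action.
        if self.state == "active":
            self.state = "escalated"
            return self.escalation_deposit
        return 0

    def tick(self, now: float) -> None:
        # Timeout action; here modeled as a neutral expiry.
        if self.state == "active" and now >= self.expires_at:
            self.state = "expired"
```

In the real system this interface would be implemented as an Ethereum contract; the sketch only fixes the shape of the five components listed above.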

10.2. Legal Contracts.

The typical legal contract should be a simple agreement between an end user and a service provider. The contract is signed and agreed upon, but it will be considered executed upon delivery of funds to a certain address. Upon execution, the service provider will provide their service as agreed upon in the contract. This could be publicity, legal services, or any other legal purpose. In the end product, these contracts will be templated and are expected to be enforceable in most jurisdictions.

10.3. Random Events.

A random beacon is called, and from this an event is chosen. Typically this choice is between a neutral event (do nothing, leave the contract in place), a hawkish escalation event, or a dovish termination-type event.

Note that publicly observable randomness is an interesting open problem. Due to the decentralized nature of blockchains, future blocks are extremely hard to predict. The question of whether the hash of a future block can be manipulated has been studied in  [BCG15] and  [PW16], where it is shown that some small influence may be bought for a very high price, but this must be done by bribing some portion of the mining pool. We feel such an attack would be extremely impractical. When designing a contract, it should be easy to require enough randomness to financially disincentivize any such attack. By simply hashing a future block, we should be able to convince any observer that the hash will be impossible to predict, at least enough to create the effect of mixed strategies. Another option is to provide an oracle that hashes our secret code with an externally generated (and generally trusted) beacon such as the NIST Randomness Beacon  [NIS11]. Again, the only real objective is to refer to a beacon that any adversary will believe is unmalleable.
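To make the hash-a-future-block approach concrete, here is a minimal sketch (our own construction, not a specified protocol) that maps a block hash, mixed with a user-held secret, to a trigger that fires with probability p:

```python
import hashlib

def triggers(block_hash: bytes, secret: bytes, p: float) -> bool:
    """Map a block hash (mixed with a secret) to a uniform draw in [0, 1)
    and fire the escalation event with probability p."""
    digest = hashlib.sha256(block_hash + secret).digest()
    draw = int.from_bytes(digest, "big") / 2**256  # uniform in [0, 1)
    return draw < p

# Example: a 5%-per-period trigger, as in Joe's contract above.
# (The block hash and secret here are placeholders.)
fired = triggers(b"\x00" * 32, b"joes-secret", 0.05)
```

Since neither party can predict the future block hash, neither can predict the draw; the secret prevents an observer from computing the outcome before the user reveals it.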

10.4. Remarks.

The on chain contracts are public. The off chain contracts are not, by default, but an end user may choose to make them public. The end user may also specify that the deposit to the service provider must come from a specified account; otherwise an anonymous prankster could trigger the event himself. Alternatively, the user may use this as leverage: the possibility that other parties may be willing to finance actions detrimental to the adversary.

11. Advantages of the Blockchain Platform

11.1. Smart Contracts.

Smart contracts are executed without any enforcement or intervention by courts.

11.2. Decentralization.

As we may deal with potential legal situations involving litigious characters, there is safety in being decentralized. If the platform is successful, we will inevitably thwart a vexatious bully in his attempt to abuse the legal system. The typical reaction of a vexatious bully is to become even more litigious, but he will find it very difficult to punish an anonymous global network with scorched-earth lawsuits. If this were all run as a single entity, it would be easy to target with frivolous lawsuits.

Another aspect of decentralization is that each member of the network has its own vested interest in the welfare of the network. This gives a natural way to publicize the network, and can amplify its effects.

11.3. Verifiable Randomness.

With either of the methods for obtaining randomness above, the randomness can be trusted and is essentially free.

11.4. Credible Ledger of Past Events.

Each action will involve a Nash token, and so will be tracked. Thus a public record of all outcomes will be available. Researchers will be able to study and predict the effectiveness of different strategies, which may lead to even more efficient procedures. Accordingly, Levetrage service providers (oracles, publicists, law firms) can gain reputation by developing a track record. Players new to the ecosystem will easily be able to search all past events and outcomes and contact the providers that best fit.

11.5. Branding.

We hope that the Levetrage network will become a brand name, with first mover recognition. While in theory someone could write the same code into any blockchain, their adversary may not know to take it seriously.

11.6. Ability to add new moves and products.

We plan to provide the basic vehicles for commitments. However, it should be rather easy for other blockchain users to develop layers on top of the platform that are more creative or tailored to unique situations that may arise. In any case, we won’t be able to stop them.

12. Other Use Cases

The following use cases are more involved but may be possible once the network is more developed.

12.1. Ad Hoc Consumer Organization.

Consumers with a common interest (say, price gouging by a cable provider) can pool payments in order to negotiate fair customer relations.

12.2. Ad Hoc Labor Organization.

In the gig economy, entire industries can form in short periods of time. Corporations can devise agreements that would not be acceptable in traditional company-worker relationships. Mixed strategy contracts that are activated when a certain number of users sign up can force negotiation. This may require some yet-to-be developed enforcement mechanisms.

12.3. Whistleblowing.

Employees may threaten (anonymously) to report (non-anonymously) practices that the employee would like stopped. For example, in many states, the board regulating an industry will not accept anonymous reports; a credible threat to report, however, can be made anonymously. Remark: we are not lawyers, but have reason to believe that if the activity is criminal in nature, it is not legally permitted to use this as leverage.

12.4. Reputation Based Commitment.

This is of a slightly different nature, as it involves repeated games. In many cases, a committing maneuver may be difficult to implement. However, if one has a reputation to uphold, then failing to follow through results in forfeiting that reputation. The platform could allow a business or individual to make threats or promises (including probabilistic ones), and then maintain a record of whether these were fulfilled (provided that verifying this is easy to do). This is similar to the idea of a credit score: lenders believe you will pay off your loan based on your history of repaying loans. Promises such as “If Trump wins, I move to Canada” can then be used to malign credibility if not followed through.

12.5. Litigation Financing.

Pie-in-the-sky for now, but with minimal imagination, one can see that by combining litigation threats as discussed above with crowd financing, some very interesting scenarios arise. Many of the scenarios are desirable, although with a little imagination, one can conjure up some that aren’t.

13. Other Issues

13.1. Asymmetric Information and Belief.

Game theory is a very tough subject in which to obtain complete results, especially when applying them to the real world. This is particularly the case when players have differing ideas and beliefs about the expectations surrounding their conflict. In this case, players can make strong moves either out of a desire to bluff or out of a legitimate belief that they are making the best offer. The adversary has no way to determine whether an offer is a bluff. These sorts of infinite-horizon games with asymmetric information are hard to predict. If the expectations are far apart, a Rubinstein game can lead to a series of offers that will not be accepted. Nonetheless, some of these tools may be used in a way that expedites reaching a solution.

 

13.2. Repeated Games.

Some players will make a choice that is irrational in the near term, motivated by the idea that they will face similar challenges in the future. In some industries, pissing contests are normal. Certain entities may scare off challengers by demonstrating that they are willing to accept losses in order to maintain a tough position for the future. In such cases, there may be no rational move the other player can make.

14. Future Efforts and Needs

This project is part coding and, in larger part, network building and research.

14.1. Legal consultation.

We would like to construct both smart contracts and templates for legal contracts that are legally viable in most jurisdictions, and will not expose anyone to any harmful litigation. This requires serious consultation with appropriate legal counsel. The main resource required here is money. We don’t believe we are selling securities, or offering gambling, but we would like to assure the network that we are on the right side of the law.

14.2. Network Building.

We will need to find partners and service providers to get the network started. This will require time and effort by team members.

14.3. Development.

Obviously, this is also a critical component. To accelerate the process, we would need to hire dedicated developers. If enough funding is available, this could simply mean offering team members enough to “quit their day jobs.”

References

[AB11]    Nejat Anbarci and John Boyd, Nash demand game and the Kalai-Smorodinsky solution, Games and Economic Behavior 71 (2011), no. 1, 14–22.

[BCG15]    Joseph Bonneau, Jeremy Clark, and Steven Goldfeder, On Bitcoin as a public randomness source, IACR Cryptology ePrint Archive 2015 (2015), 1015.

[BRW86]    Ken Binmore, Ariel Rubinstein, and Asher Wolinsky, The Nash bargaining solution in economic modelling, RAND Journal of Economics 17 (1986), no. 2, 176–188.

[Mut99]    Abhinay Muthoo, Bargaining theory with applications, Cambridge University Press, New York, NY, USA, 1999.

[Nas53]    John Nash, Two-person cooperative games, Econometrica 21 (1953), no. 1, 128–140.

[Neu28]    J. von Neumann, Zur Theorie der Gesellschaftsspiele, Mathematische Annalen 100 (1928), 295–320.

[NIS11]    NIST Randomness Beacon, https://beacon.nist.gov (2011).

[PW16]    Cécile Pierrot and Benjamin Wesolowski, Malleability of the blockchain’s entropy, IACR Cryptology ePrint Archive 2016 (2016), 370.

[RPOFZ91]    Alvin Roth, Vesna Prasnikar, Masahiro Okuno-Fujiwara, and Shmuel Zamir, Bargaining and market behavior in Jerusalem, Ljubljana, Pittsburgh, and Tokyo: An experimental study, American Economic Review 81 (1991), no. 5, 1068–95.

[Rub82]    Ariel Rubinstein, Perfect equilibrium in a bargaining model, Econometrica 50 (1982), no. 1, 97–109.

[Sch60]    T. C. Schelling, The strategy of conflict, Oxford University Press, 1960.

Mixed Metrics, LLC, Eugene, OR

Shorter White Paper

Levetrage

IMPLEMENTING MIXED STRATEGIES ON THE BLOCKCHAIN

Abstract. We explore innovations available using blockchain technologies which allow two or more parties with conflicting interests to come to cooperative agreements. In light of Rubinstein’s strategic approach to bargaining, it is clear that a probabilistic negotiation device would be an effective tool toward reaching the Nash Bargaining Solution. We argue that the blockchain may provide players with such a commitment device. Rational players with access to such a device should, in actual practice, be able to come to an agreement very near the Nash Bargaining Solution, in a short amount of time.

1. Introduction

The bargaining problem is often given as follows  [Rub82, pg 97]:

Two individuals have before them several possible contractual agreements. Both have interests in reaching agreement but their interests are not entirely identical. What “will be” the agreed contract, assuming that both parties behave rationally?

This is a difficult problem. Without ways to enforce binding threats, negotiation can be unpredictable, and often the outcome is determined by factors not easy to quantify within the scope of classical economic discussions. Solutions to hard problems are by nature elusive, so academics usually restrict themselves to tractable models. In 1953, Nash  [Nas53] studied a model demand game, and gave axiomatic characterizations of the so-called Nash Bargaining Solution. Rubinstein’s seminal paper  [Rub82] in 1982 indicated a possible strategic approach towards inducing a solution. Rubinstein’s game (developed further in  [BRW86]) introduced two factors forcing the players to resolve the game: 1) a penalty for time spent negotiating and 2) an exogenous probability of negotiation breakdown.

Game theoretical situations, in contrast to real world negotiations, usually allow players to choose from a finite-parameter space of moves. In the real world, there are an unlimited number of moves, and there are an unlimited number of games players can choose to play. If one player has the ability to fully commit to a threat, this player can change the nature of the game by limiting the rational options of his opponent.

We argue that the blockchain provides two important features that make a Rubinstein solution practical. First, genuine, publicly observable randomness (cf.  [BCG15]). Second, smart contracts that allow for sums of money to be directed around, based on randomness or on the direction of neutral oracles. With smart contracts, it is now much easier to credibly make threats.

We refer the reader to T.C. Schelling’s classic text,  [Sch60] for a fascinating discussion of how game theory meets the real world. A more modern reference on the theoretical side is given in the monograph  [Mut99], which includes a thorough discussion of Rubinstein type games.

2. Levetrage

When two parties are negotiating over a surplus, there is a well-defined idea in game theory literature of what the agreement “should” be. This is called the Nash Bargaining Solution (NBS). Assuming that the NBS is the result of an efficient market, any agreement reached other than the NBS should be considered a market inefficiency.

We begin by stating our main result here, and then explain its components in the sequel. This is essentially given by arguments in  [Rub82].

Theorem 1. Suppose that two players are negotiating in a symmetric game where they each have access to commitment devices, commitment release devices, and probabilistic commitment devices. Then rational players should quickly reach an agreement very near the Nash Bargaining Solution.

The following corollary follows from the translation invariance assumption of the Nash problem.

Corollary 1. Suppose A has a legal right and ability to cause an event T to happen. If the cost to A of event T is c1 and the cost to B is c2, and if all players are rational and have access to probabilistic commitment devices and a device committing to T, then B should pay A a sum of (c2 − c1)/2 to agree not to cause T.

3. Commitment Devices

Paraphrasing Schelling’s definition of commitment  [Sch60, pg 127], we say that commitment involves maneuvers that leave one in a position where the only rational option is to follow through with a threat. The object of commitment is the credibility of a threat. Nash nicely summarizes the issues involving threats as follows  [Nas53, pg 130]:

If one considers the process of making a threat, one sees that its elements are as follows: A threatens B by convincing B that if B does not act in compliance with A’s demands, then A will follow a certain policy T. Suppose A and B to be rational beings, it is essential for the success of the threat that A be compelled to carry out his threat T if B fails to comply. Otherwise it will have little meaning. For, in general, to execute the threat will not be something A would want to do, just of itself. The point of this discussion is that we must assume there is an adequate mechanism for forcing players to stick to their threats and demands once made: and one to enforce the bargain, once agreed. Thus we need a sort of umpire, who will enforce contracts of commitments.

3.1. Commitment Using the Blockchain.

Smart contracts on the blockchain can provide a simple mechanism for commitment, by using an oracle. Suppose that A makes a threat. A can always borrow a large amount of money and tie the funds up in a smart contract on the blockchain. If a trusted oracle determines that the threat has not been followed through, the funds are “burned”. So if two players are bargaining over $100, A can declare that he will lose $100 if he accepts less than $95. Now B has no rational choice but to agree to $95. We assume that the oracle can only see the outcome of the final agreement, and is not corruptible. (In practice, perceived corruptibility is potentially an issue.)
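The arithmetic behind this credibility can be sketched as follows (a toy calculation; the function and numbers are ours, not platform code):

```python
def payoffs(a_share: int, burn_if_below: int = 95, bond: int = 100):
    """Payoffs (A, B) when splitting $100, given A's burn commitment:
    the oracle burns A's $100 bond if A accepts any share below $95."""
    a, b = a_share, 100 - a_share
    if a < burn_if_below:
        a -= bond  # the committed bond is burned
    return a, b

# Once committed, any split below 95 leaves A strictly worse off than
# walking away from it, so B's only rational choice is the 95/5 split.
assert payoffs(95) == (95, 5)
assert payoffs(50) == (-50, 50)  # A would never rationally agree to this
```

The bond turns what would otherwise be cheap talk into a threat that A is compelled to carry out, exactly the "umpire" role Nash describes above.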

3.2. Commitment Release.

Taken at face value, the above commitment device leads to a “draw first” game. However, we argue that if one can create a commitment, one should also be able to create a way to release the other side from commitment. For example, if A ties up $100 in a contract, B can counter with a slightly more complex contract: B also ties up $100, and loses his $100 if he makes any agreement, unless $50 is deposited into a third account that is directly controlled by B. Now A is released from his commitment: he can create a contract that controls $50, sending it to the account controlled by B if and only if B agrees to the original 95/5 split. The oracle governing A’s original contract can only see that an acceptable 95/5 split has been agreed to, but cannot see that A is contemporaneously giving $50 back to B.

It does not take much imagination to see that, with trustworthy but scope-limited oracles, this game can go on ad infinitum.

3.3. Advantages of the Blockchain for this type of commitment.

Implementing a legal contract accomplishing the above would, we imagine, be more problematic. The code that receives a broadcast signal and either refunds or distributes A’s money is not a legal contract. It is a piece of code, and does not require consideration from multiple parties. Importantly, while a court can rule that a legal contract is not enforceable, courts cannot rule that code will not execute, nor can they reverse the execution of the code once it has run.

4. Mixed Strategies

If players are always able to make and release commitments as in the previous section, the game is essentially the same as without these devices. What is needed, then, is a threat to make the negotiations fail completely. Provided that a player has a mechanism for causing the negotiation to fail, this player can turn a messy game into a much tidier Rubinstein type game.

In particular, suppose that A knows that commencing irrevocable event T will cause the negotiation to fail, to both players’ detriment. A can make an offer on some block of the blockchain, and attach to it a probability p. If, after so many blocks, the offer is not accepted, then T happens with probability (1 − p). Now B must not dilly-dally. If the offer is acceptable, it should be accepted; it will not be accepted only if B believes that B can achieve a better outcome by continuing negotiation.

At this point, we claim the game is almost symmetric: B could immediately make a counter offer of the same nature, so provided p is near 1 there is little difference in who goes first. Now A should not make p too low: B can always return the same offer the instant (or the next block) that B receives it, so little is gained by making p too low. However, there is always a probability that B will not receive the threat. The only situation in which a low p would give an advantage to A would be if A catches B in a situation where B has the ability to immediately accept the offer the instant it is made, but not the ability to immediately return a counter offer.
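The pressure such a device puts on B is easy to quantify. A back-of-the-envelope sketch, using a hypothetical 5% per-period breakdown probability of our own choosing:

```python
def survival(q: float, rounds: int) -> float:
    """Probability the negotiation has NOT broken down after `rounds`
    unaccepted offer periods, when each period triggers the irrevocable
    event T with probability q = 1 - p."""
    return (1.0 - q) ** rounds

# With a 5% per-period trigger, stalling for 14 periods already risks
# breakdown roughly half the time (0.95**14 ≈ 0.488).
risk_of_breakdown = 1.0 - survival(0.05, 14)
```

Even a seemingly gentle per-period probability therefore makes dilly-dallying expensive in expectation, which is precisely what drives B to accept an acceptable offer immediately.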

4.1. Proof of Theorem 1.

We prove our theorem by arguing that p should be very close to 1 and the offer should be the Nash Bargaining Solution. (Recall that in the simplest symmetric case, the NBS simply splits the pot.) We argue by contradiction: first we suppose that there is a strategy for A that has an expectation larger than 1/2. A makes the offer determined by this strategy. Now the instant that the offer is received, by symmetry, B believes they can maximize their expectations going forward by making the same counter offer. Since this move by B (who is assumed rational) is determined, the expectation to A is still the expectation that remains. But B’s expectation is at least that of A’s, so both are larger than 1/2, a contradiction. So 1/2 is an upper bound for expectations to A. Thus, if A can achieve 1/2 via strategy S, A should implement S. One such strategy is to begin by offering 1/2. B also knows that the best B can hope for is 1/2, so B will accept it if offered. So A should make this offer, but attach to it some probability p < 1. By choosing p slightly less than 1, B should accept as soon as possible, so as to minimize the probability that the game will end in failure.

5. Disagreement Device

In order to implement the game described in the previous section, we needed a credible disagreement device. In the literature, this problem is nontrivial: For example, we quote from a 2011 discussion of Rubinstein’s game  [AB11]:

The presence of Nature’s choice in that procedure is very appealing. It points out the possibility that a relationship may end after a failure to reach an agreement with some probability. But the fact that this probability is determined by one of the players may not be deemed very realistic.

With the randomness present in the blockchain, the probability can realistically be determined by one of the players. What is left is to show that the blockchain can trigger a disagreement event. This means the contract should take an action that the player who created the contract cannot take back, and that is damaging to the other party.

We suggest a very simple form of such a device: the contract takes funds that previously belonged to a player and deposits them into an account held by a third party. The third party has invoiced the player for services to be delivered as soon as funds are transferred. The player and the third party have signed an agreement making any payments non-refundable.

While there are a number of services that can legally be contracted for, we suggest one in particular: negative publicity.

For example, a player may make an arrangement with a PR firm for $2500 worth of negative publicity against his negotiating adversary. The PR firm agrees to perform this, pending a non-refundable payment. In many legal disputes, one litigant may have damning information on the adversary that will cause more damage to the adversary than the cost of spreading it. This works in particular if the litigant has a clear legal right to speak loudly and freely. This can even be further automated: a freeway billboard company or internet advertising agency can sell negative advertising, and publish the advertising the instant the payment is received.

5.1. Example.

Suppose a customer is sold a defective product, and the seller refuses a refund. The customer can threaten to spend several hundred dollars with a marketing firm publicizing just how horrible the defective product was. If the business stands to lose more in future sales than the cost of the marketing, the business is wise to offer the customer a refund, up to an amount determined by Theorem 1.
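Corollary 1 gives the size of that refund directly. With hypothetical numbers of our own (not from the text): the campaign costs the customer c1 = $300 to mount, and costs the business c2 = $2000 in lost sales.

```python
def fair_transfer(c1: float, c2: float) -> float:
    """Per Corollary 1: if A can cause event T at cost c1 to A and cost c2
    to B, the Nash Bargaining Solution has B pay A (c2 - c1) / 2 for A
    to agree not to cause T."""
    return (c2 - c1) / 2

# Hypothetical numbers: the campaign costs the customer $300 and the
# business $2000 in lost sales, so the bargained refund is $850.
assert fair_transfer(300, 2000) == 850.0
```

The transfer splits the surplus of avoiding T: the business saves $2000, the customer saves $300, and the $850 payment leaves each side $1150 better off than the breakdown outcome.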

6. Randomness on the blockchain

Publicly observable randomness is an interesting open problem. Due to the decentralized nature of blockchains, future blocks are extremely hard to predict, and we think the uncertainty is enough for the success of a mixed strategy. The question of whether the hash of a future block can be manipulated has been studied in  [BCG15] and  [PW16], where it is shown that some small influence may be bought for a very high price, but this must be done by bribing some portion of the mining pool. We feel such an attack would be extremely impractical. When designing a contract, it should be easy to require enough randomness to financially disincentivize any such attack. By simply hashing a future block, we should be able to convince any observer that the hash will be impossible to predict, at least enough to create the effect of mixed strategies. Another option is to use an oracle that hashes an externally generated (and generally trusted) beacon such as the NIST Randomness Beacon  [NIS11]. Again, the only real objective is to refer to a beacon that any adversary will believe is unmalleable.

7. Concept Summary

The typical use case is as follows. An individual finds herself in a dispute initiated by another party, who is somewhat of a bully. Both parties have much to lose if the dispute escalates; however, the adversary might have more to lose, in particular because the adversary is in the wrong. Unfortunately for the individual, escalating the dispute comes with costs, so she is averse to that. The bully knows this, and uses this fact against her, continuing to extract concessions and ignoring all threats. So instead of making a vacuous threat, our protagonist makes a probabilistic threat to expose the bully far and wide, and follows through with the threat, probabilistically. The first act of commitment is simply to let the random contract execute. By doing this once, she has demonstrated that she can stomach the low probability of a bad outcome, knowing that the bully has considerably more exposure. When she makes the threat again, the bully can no longer rely on her self-interest and must think about his own. If the bully is rational, he will agree to end the dispute on equitable terms.

References

[AB11]    Nejat Anbarci and John Boyd, Nash demand game and the Kalai-Smorodinsky solution, Games and Economic Behavior 71 (2011), no. 1, 14–22.

[BCG15]    Joseph Bonneau, Jeremy Clark, and Steven Goldfeder, On Bitcoin as a public randomness source, IACR Cryptology ePrint Archive 2015 (2015), 1015.

[BRW86]    Ken Binmore, Ariel Rubinstein, and Asher Wolinsky, The Nash bargaining solution in economic modelling, RAND Journal of Economics 17 (1986), no. 2, 176–188.

[Mut99]    Abhinay Muthoo, Bargaining theory with applications, Cambridge University Press, New York, NY, USA, 1999.

[Nas53]    John Nash, Two-person cooperative games, Econometrica 21 (1953), no. 1, 128–140.

[NIS11]    NIST Randomness Beacon, https://beacon.nist.gov (2011).

[PW16]    Cécile Pierrot and Benjamin Wesolowski, Malleability of the blockchain’s entropy, IACR Cryptology ePrint Archive 2016 (2016), 370.

[Rub82]    Ariel Rubinstein, Perfect equilibrium in a bargaining model, Econometrica 50 (1982), no. 1, 97–109.

[Sch60]    T. C. Schelling, The strategy of conflict, Oxford University Press, 1960.

Mixed Metrics, LLC

Monday, June 12, 2017

White papers

While I do have a much wordier white paper that gives more details about the implementation, I'm posting the following one on the arXiv.

(Update: Not posting on the Arxiv - "not authorized" to submit? But I've authored 19 papers... )

I will post the longer one soon.

https://drive.google.com/file/d/0B9s-SVeO-T9WN1Y3SlFnbFZMNzA/view?usp=sharing

Building the network...

We've received some positive feedback from an established full service national PR firm based in Philadelphia. 

Monday, June 5, 2017

PGP confirmation keybase.io/micahwayne

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA512

I'm posting this to confirm that this is me:   I am the UO math guy with the bio posted below. I have a keybase account that has my public key. 
-----BEGIN PGP SIGNATURE-----
Version: Keybase OpenPGP v2.0.71
Comment: https://keybase.io/crypto

wsFcBAABCgAGBQJZNVz0AAoJENY3dX4RJkjqUmoP/0/xWf0H3Q5sIhNdW4vqV2IY
Qo2YckBXi6e2B9iruKxi4mx8/QO6kjwinhWL0vwVBbjvLdtE1OKE8YvtypXkT/ay
aIQWRLn6wCrG6fes9Cx31c51/9uQ1kjXjDGkXsZR/LX1J994DD4IK2KjxmIGSxU5
2q6cjkToFuqIdSlvb1FU0FzHRTGlRvQftXUzTw9DwN3QO6vRWz7MLNiuou1zv29U
XeIxgA/XmU91JNBB0KvEdPOUvRliNl3wUSIzEX0HS4rXc4AmzCVrMXAaj/p4WgsG
yXOAH8draSEPIacndzF1nSQ0RClNR3FK7hOW6MGfUSZ/KjjWWLvxySb3dzqdsbVr
PDnEQAhqJFECxTxyzoGBJt7mp3Bs9b6loSueazgz+4O+96O/v7FxYhEp2jWXr/Zo
P4C0gjnS6fefpYIbwyebh0OhmSMgS3ms2Ld0rqr7T6/wl2V2uyZZ2gfWQVBs7iif
rkdEZz2v0+7HPCdobWa4mUCsesv5qjNsRnBPcOQE4ZyXpE6yOBzXl5ZykrNWUx82
aDwoMTNpXplAEGswlFnQtPKQomBGLM4yFXGMyLyPdFyb8JqeS4w19sZ7Agq8bWpv
TIsa1Wi8aWEi3xnOVTWrgihS9dkzupus/X8CWJ06Mv1Z7WCPXhNWu2kpOIjWC7HD
nhd4K36SBrDjgWTuWYdn
=Y+in
-----END PGP SIGNATURE-----

Sunday, June 4, 2017

Let me introduce myself....

I'm currently an Assistant Professor of Mathematics at University of Oregon, and also the owner of Mixed Metrics, LLC.

I earned a Ph.D. in 2008 from the University of Washington before accepting a job as an instructor at Princeton. In 2011, I was promoted to Assistant Professor at Princeton.  In 2013 I returned to the Northwest with a tenure-track position at the University of Oregon.

My research began in nonlinear elliptic equations, but this led me to studying optimal transport and some related equations in economics, as well as other equations in geometric analysis.  At Princeton, I became interested in data science after sitting in on a graduate course and realizing that data science is built largely on geometric analysis.   I also became further interested in economics, partially because I was in the neighborhood of great economic minds (John Nash worked one floor up in Fine Hall for five years) and also because studying optimal transport problems led me into the economics literature.

I first became interested in cryptocurrency when I was an undergraduate.    As a "discipline" for hacking the Residential Life computers, I was told to write a research report on issues in computer ethics.  I spent the summer of 1997 studying all of the theoretical dilemmas that the new information age posed.  In the late 90's, cryptocurrency was simply a crazy, far-off idea. So I was thrilled when I heard in 2012 that the idea had finally been realized.  To be honest, I did not think at the time that it would go beyond Silk Road. I became fascinated again, and invested in the Ethereum crowdsale in 2014.

I've had the idea to implement mixed strategies for several years.  In May 2016, I incorporated Mixed Metrics, LLC.   While I wanted to brand this as a data science consultancy, the word "Mixed" in the title was in the hope that I would someday be able to implement the ideas.   Earlier this year, I found myself at lunch with economists from top universities around the globe, and I decided to explain the idea.   I was encouraged to ``go for it."  I soon after began writing the white paper.