Proof of Stake: How I Learned to Love Weak Subjectivity
- Translation
A lot has been written about the nuances of PoW mining, but PoS, which appeared back in 2011, remains a mystery to many. There is now a clear tendency to combine the two approaches so that they compensate for each other's shortcomings. Russian-language material on the subject is still scarce, however, so we at Hashflare decided to publish this translation.
The “proof of stake” problem is still one of the most hotly debated topics in the cryptocurrency world. Although the idea has many undeniable advantages, including efficiency, a larger security margin and resistance to the hardware-centralization problems that plague mining, proof of stake algorithms tend to be substantially more complex than the alternatives based on proof of work. Skepticism also surrounds whether “proof of stake” can work at all, especially with respect to what is widely considered a fundamental problem: “nothing at stake”. Nevertheless, as it turns out, the problems are far from hopeless, and one can make a strong case that a proof of stake algorithm can be made to work, and at a moderate cost.

Economic Sets and Nothing at Stake
Let us start with the basics. The purpose of a consensus algorithm, in general, is to allow the state of a ledger to be updated securely according to certain state transition rules, where the right to perform a state transition is distributed among some economic set. An economic set is a set of users who can be granted, via some algorithm, the right to collectively perform transitions. The important property such a set must have is that it is securely decentralized: no single participant, and no group of participants colluding in secret, can gain an overwhelming majority of the set, even if that participant has a large amount of capital and a financial incentive to attack. The kinds of economic sets that have been used or proposed include:
- Owners of computing power: standard “proof of work”, also known as TaPoW. Note that this comes in specialized-hardware and (one would hope) general-purpose-hardware variants.
- Stakeholders: all of the many variants of “proof of stake”.
- A user's social network: Ripple/Stellar-style consensus.
Note that there have recently been attempts to develop consensus algorithms based on the traditional Byzantine generals problem; however, all of these approaches rest on an M-of-N security model, and the concept of “Byzantine fault tolerance” by itself leaves open the question of which set the N should be drawn from. In most cases the set used is the stakeholders, so we will treat these newer BFT paradigms simply as a smarter subcategory of proof of stake.
“Proof of work” has a property that makes the design of effective algorithms much easier: participation in the economic set requires the consumption of a resource external to the system. This means that a miner, when contributing to the blockchain, must choose which of all the possible forks to mine on (or whether to try to start a new fork), and the different options are mutually exclusive. Double-voting, even when the second vote is cast many years after the first, is costly, because it forces the miner to split mining power between the votes; the dominant strategy is therefore always to put all of one's mining power on the single chain one believes is most likely to win.

In “proof of stake”, however, the situation is different. Although inclusion in the economic set may be costly (though, as we will see later, not always), voting is free. This means that “naive proof of stake” algorithms, which simply try to copy “proof of work” by turning every coin into a “simulated mining rig” that gives its owner's account a certain chance per second of producing a valid block, have a fatal flaw: if there are multiple forks, the optimal strategy is to vote on all of them at once. This is the essence of “nothing at stake”.
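To make the incentive failure concrete, here is a toy calculation (not any specific protocol): a validator may sign either or both of two competing forks, the block reward is paid out on whichever fork eventually wins, and signing costs nothing. A minimal sketch in Python:

```python
# Toy model of "nothing at stake" in naive proof of stake.
# Assumption: reward r is collected on whichever fork wins, voting is free.

def expected_reward(votes, p_fork_a_wins=0.7, r=1.0):
    """votes is a subset of {'A', 'B'}; returns the validator's expected reward."""
    ev = 0.0
    if 'A' in votes:
        ev += p_fork_a_wins * r          # paid if fork A becomes canonical
    if 'B' in votes:
        ev += (1 - p_fork_a_wins) * r    # paid if fork B becomes canonical
    return ev

print(expected_reward({'A'}))       # 0.7 -- back only the likely winner
print(expected_reward({'B'}))       # 0.3
print(expected_reward({'A', 'B'}))  # 1.0 -- voting on every fork strictly dominates
```

Since the all-forks strategy dominates for every individual validator regardless of what the others do, the incentive structure alone does not push naive proof of stake toward a single chain.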

Note the one argument for why it might nonetheless not be in a user's interest to vote on multiple forks in a “proof of stake” environment: “altruism-prime”. Altruism-prime is the combination of genuine altruism (on the part of users or software developers), expressed as concern for the welfare of other users and of the network and as a moral reluctance to do anything obviously harmful (such as double-voting), and “fake altruism”, which arises because coin holders do not want to see the value of their coins fall.
Unfortunately, altruism-prime cannot be relied upon by itself, because the value of the coin that arises from the integrity of the protocol is a public good and will therefore be undersupplied (for example, if each of 1000 stakeholders has a 1% chance of being decisive for the success of an attack that would drive the coin's value to zero, then each stakeholder only internalizes 1% of the harm involved). In the case of a distribution equivalent to the Ethereum genesis block, depending on how you estimate each user's probability of being decisive, the required bribe would amount to somewhere between 0.3% and 8.6% of the total stake (or even less if the attack is not fatal to the currency). That said, algorithm designers should not discard altruism-prime altogether, so that its benefits are captured wherever it does work.
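A back-of-the-envelope sketch of the public-good argument above; the numbers are purely illustrative:

```python
# Why altruism-prime is weak: each stakeholder internalizes only a small
# fraction of the damage an attack does to the coin's value.
# All numbers below are illustrative assumptions.

holding = 1000.0      # value of this stakeholder's coins
p_decisive = 0.01     # chance that this stakeholder's cooperation decides the attack
value_drop = 1.0      # the attack drives the coin's value to zero (100% loss)

expected_personal_loss = holding * value_drop * p_decisive
print(expected_personal_loss)  # 10.0 -- any bribe above 1% of the holding already pays
```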
Short term and long term
If we focus exclusively on short-range forks (forks lasting fewer than some number of blocks, perhaps 3000), there is a solution to the “nothing at stake” problem: security deposits. In order to be eligible to earn rewards for voting on blocks, a user must place a security deposit, and if the user is caught voting on multiple forks, a proof of that misbehavior can be included in the original chain, taking the reward away. Voting on only one fork therefore once again becomes the dominant strategy.
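As a minimal sketch of the deposit-and-penalty idea (the data layout and the bounty fraction here are illustrative assumptions, not the actual Slasher rules), an observer who finds two signed votes by the same validator at the same height can publish them as evidence and have the offender's bond forfeited:

```python
# Illustrative sketch of deposit-based penalties for double-voting.
# Depending on the protocol variant, the forfeited amount may be the reward,
# the deposit, or both; a fixed bounty fraction is assumed here.

deposits = {"validator1": 1000}   # bonded security deposits
votes = {}                        # (validator, height) -> block hash voted for

def vote(validator, height, block_hash):
    key = (validator, height)
    if key in votes and votes[key] != block_hash:
        # Equivocation: two different blocks signed at the same height.
        return (votes[key], block_hash)   # publishable evidence
    votes[key] = block_hash
    return None

def slash(validator, reporter, bounty_fraction=0.05):
    """Forfeit the deposit; pay a small bounty to whoever reported the evidence."""
    deposit = deposits.pop(validator, 0)
    return {"burned": deposit * (1 - bounty_fraction),
            "bounty_to_" + reporter: deposit * bounty_fraction}

vote("validator1", 100, "0xaaa")
evidence = vote("validator1", 100, "0xbbb")   # second vote at the same height
if evidence:
    print(slash("validator1", "observer"))    # {'burned': 950.0, 'bounty_to_observer': 50.0}
```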

Another set of strategies, called “Slasher 2.0” (in contrast to Slasher 1.0, the original deposit-based “proof of stake” algorithm), imposes the penalty on voters who vote on the wrong fork rather than on those who double-vote. This simplifies the analysis considerably, since there is no longer any need to pre-select the voters for each block many blocks in advance in order to rule out probabilistic double-voting strategies, but it comes at a cost of its own: users may be unwilling to sign anything at all when there are two alternative blocks at a given height. If we want users to still be able to sign in that case, a variant of logarithmic scoring rules can be used (see the link for details). For the purposes of this article, the properties of Slasher 1.0 and Slasher 2.0 are identical.

The reason this only works for short-range forks is simple: at some point the user must be entitled to withdraw the security deposit, and once the deposit is withdrawn, the incentive not to vote on a long-range fork, started far back in the past, using those coins disappears. One class of strategies that tries to deal with this is to make the deposit permanent, but this approach has its own problems: unless the value of the coin keeps rising and keeps attracting new depositors, the consensus set ends up ossifying into a kind of permanent nobility. Given that one of the main ideological grievances that has driven interest in cryptocurrency is precisely the way centralization tends to produce entrenched classes holding permanent power, reproducing such a class is likely to be unacceptable to most users, at least for those blockchains that are intended to be permanent. A nobility model may, however, be exactly the right approach for special-purpose, ephemeral blockchains that are meant to die quickly (for example, one might imagine such a blockchain being used to run a single round of a blockchain-based game).
One class of approaches to the problem combines the Slasher mechanism described above, which covers short-range forks, with a fallback for long-range forks: transactions as proof of stake (TaPoS). In essence, TaPoS counts transaction fees toward a block's “score” (and requires every transaction to include some bytes of a recent block hash, so that transactions cannot simply be replayed onto a different chain); the theory is that an attacking fork would have to spend a large amount of money in order to succeed. However, this hybrid approach has a fundamental flaw: if we assume that the probability of a successful attack is negligible, then every signer has an incentive to offer, in exchange for a small fee, the service of re-signing all of their transactions onto a new blockchain, so a zero chance of attack is not game-theoretically stable. Does a world in which every user runs a node.js web app to collect such payments seem implausible? There is an even simpler route: selling old, no-longer-used private keys on the black market. And even setting black markets aside, a “proof of stake” system of this kind will always be exposed to the risk that the participants in the original presale, who once held a large share of the stake among them, eventually find each other and collude to launch a fork.
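As a concrete illustration of the TaPoS mechanism described in the previous paragraph (the field names and the four-byte commitment are illustrative assumptions):

```python
# Sketch of transactions-as-proof-of-stake: every transaction commits to a few
# bytes of a recent block hash, so it cannot be blindly replayed on a fork
# whose history does not contain that block.

import hashlib

def make_tx(sender, recipient, amount, recent_block_hash):
    return {
        "sender": sender,
        "recipient": recipient,
        "amount": amount,
        "chain_ref": recent_block_hash[:4],   # commit to 4 bytes of a recent block hash
    }

def tx_counts_on_chain(tx, chain_block_hashes):
    """The transaction only adds to the score of a chain containing the referenced block."""
    return any(h[:4] == tx["chain_ref"] for h in chain_block_hashes)

block_hash = hashlib.sha256(b"block 1000 on the original chain").digest()
tx = make_tx("alice", "bob", 10, block_hash)
print(tx_counts_on_chain(tx, [block_hash]))                                        # True
print(tx_counts_on_chain(tx, [hashlib.sha256(b"attacker fork block").digest()]))   # False
```

The weakness described above lies not in this mechanism itself but in the incentives around it: the holders of old keys can always be persuaded to re-sign their history onto an attacker's chain.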
Taking all of the above into account, we can conclude that this threat of an attacker building up a fork from an arbitrarily long range in the past is, unfortunately, fundamental, and that no non-degenerate implementation allows a proof of stake algorithm to work under the “proof of work” security model. However, we can get around this fundamental barrier with a small, but nevertheless fundamental, change to the security model.
Weak subjectivity
Although there are many ways to categorize consensus algorithms, the classification that matters for this article is the following. First, the two most common paradigms:
- Objective: a new node coming onto the network, knowing nothing except (i) the protocol definition and (ii) the set of all blocks and other “important” messages that have been published, can independently arrive at exactly the same conclusion about the current state as the rest of the network.
- Subjective: the system has stable states in which different nodes come to different conclusions, and a large amount of social information (that is, reputation) is required in order to participate.
Systems that use a social network as their consensus set (for example, Ripple) are necessarily subjective; a new node that knows nothing but the protocol and the published data can be convinced by an attacker that the attacker's 100,000 nodes are trustworthy, and without reputation there is no way to recognize this as an attack. “Proof of work”, by contrast, is objective: the current state is always the state containing the highest expected amount of proof of work.
Now, for the sake of “proof of stake”, we add a third paradigm:
- Weakly subjective: a new node coming onto the network, knowing nothing except (i) the protocol definition, (ii) the set of all blocks and other “important” messages that have been published, and (iii) a state from less than N blocks ago that is known to be valid, can independently arrive at exactly the same conclusion about the current state as the rest of the network, unless there is an attacker who permanently controls more than X percent of the consensus set.
Under this model it becomes clear that “proof of stake” can work perfectly well: we simply forbid nodes from reverting more than N blocks, and set N equal to the length of the security deposit period. In other words, once a state S has been valid and has become the ancestor of at least N valid states, no state S' that is not a descendant of S can be valid from then on. Long-range attacks cease to be a problem, for the trivial reason that we have declared long-range forks invalid as part of the protocol definition. This rule is clearly weakly subjective, with the added bonus that X = 100% (that is, no attack can cause a permanent split unless it lasts longer than N blocks).
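A minimal sketch of the revert limit described above, with an illustrative chain representation (lists of block hashes) and scoring function; the constant 3000 is taken from the earlier discussion of deposit lengths:

```python
# Fork-choice sketch with a weak-subjectivity revert limit: a client refuses
# to reorganize onto any chain that forks off more than N blocks behind its head.

N = 3000  # revert limit, tied to the length of the security deposit period

def revert_depth(current_chain, candidate_chain):
    """How many blocks of the current chain the candidate would undo."""
    shared = 0
    for a, b in zip(current_chain, candidate_chain):
        if a != b:
            break
        shared += 1
    return len(current_chain) - shared

def accept_reorg(current_chain, candidate_chain, score):
    if revert_depth(current_chain, candidate_chain) > N:
        return False   # long-range fork: invalid by protocol definition
    return score(candidate_chain) > score(current_chain)

current  = ["genesis"] + ["a%d" % i for i in range(5000)]
attacker = ["genesis"] + ["b%d" % i for i in range(6000)]  # longer, but forks at genesis
print(accept_reorg(current, attacker, score=len))          # False: revert depth is 5000 > N
```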
Another weakly subjective scoring method is exponential subjective scoring (ESS), defined as follows:
- Every state S maintains a “score” and a “gravity” (its own weight).
- score(genesis) = 0, gravity(genesis) = 1
- score(block) = score(block.parent) + weight(block) * gravity(block.parent), where weight(block) is usually 1, although more advanced weight functions can also be used (for example, in Bitcoin, weight(block) = block.difficulty also works well).
- If a node sees a new block B' with B as its parent, then, taking n to be the length of the longest chain of descendants of B at that time, gravity(B') = gravity(B) * 0.99^n (note that constants other than 0.99 can also be used).

In essence, we explicitly penalize forks that show up later.

[Charts omitted: block score plotted against block number, comparing the score and gravity (“own weight”) of the original chain with those of a fork that starts later.]

A useful property of ESS, in contrast to more naive approaches to subjectivity, is that permanent network splits are essentially avoided: if the interval between the first node on the network hearing about block B and the last node on the network hearing about block B is k blocks, then a fork is unsustainable unless the lengths of the two forks stay within roughly k percent of each other (in which case the forks' differing gravities ensure that half of the network will always see one fork as having the higher score while the other half supports the other). Hence, ESS is weakly subjective, with X roughly corresponding to how close to a 50/50 split of the network an attacker can engineer.

In general, the “max revert N blocks” rule is superior and simpler, but ESS may be useful in situations where users are comfortable with a high degree of subjectivity (that is, a small N) in exchange for climbing quickly to a very high level of protection (that is, becoming invulnerable to 99% attacks after N blocks).
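A minimal sketch of the scoring rule defined above, with an illustrative block structure; a real client would of course track this incrementally rather than walking the whole chain:

```python
# Exponential subjective scoring (ESS) sketch: forks announced late are discounted.

DISCOUNT = 0.99

class Block:
    def __init__(self, parent=None, weight=1.0):
        self.parent = parent
        self.weight = weight
        self.longest_chain_below = 0   # longest descendant chain this node has seen so far
        if parent is None:             # genesis
            self.score, self.gravity = 0.0, 1.0
        else:
            n = parent.longest_chain_below
            self.gravity = parent.gravity * (DISCOUNT ** n)            # gravity(B') = gravity(B) * 0.99^n
            self.score = parent.score + self.weight * parent.gravity   # score recursion from the definition
            depth, ancestor = 1, parent
            while ancestor is not None:                                # record the new chain length upward
                ancestor.longest_chain_below = max(ancestor.longest_chain_below, depth)
                depth += 1
                ancestor = ancestor.parent

genesis = Block()
honest = genesis
for _ in range(100):
    honest = Block(honest)     # the chain this node heard about as it was produced

fork = genesis                 # an attacker publishes a competing chain only now
for _ in range(100):
    fork = Block(fork)

print(round(honest.score, 1), round(fork.score, 1))   # roughly 100.0 vs 37.2
print(honest.score > fork.score)                      # True: the late fork is heavily discounted
```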
Consequences
So what would a world running on weakly subjective consensus look like? First of all, nodes that are always online would be fine: in their case weak subjectivity is by definition equivalent to objectivity. Nodes that pop online only occasionally, but at least once every N blocks, would also be fine, since they can repeatedly resynchronize to the latest state of the network. However, new nodes joining the network, and nodes that reappear after a very long time offline, do not have a consensus algorithm that can reliably protect them. Fortunately for them, the solution is simple: the first time they sign up, and every time after a very long period offline, they need only obtain a recent block hash from a friend, from a blockchain explorer, or simply from their software vendor, and paste it into their blockchain client as a “checkpoint”. From that point on they can securely update their view of the current state.
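A minimal sketch of this bootstrap step (the checkpoint value and the chain representation are illustrative):

```python
# Weak-subjectivity bootstrap: a new or long-offline client pins a checkpoint
# hash obtained out of band and only accepts histories that contain it.

def sync(candidate_chain, checkpoint_hash):
    """candidate_chain is a list of block hashes, oldest first."""
    if checkpoint_hash not in candidate_chain:
        raise ValueError("chain does not contain the out-of-band checkpoint; rejecting")
    return candidate_chain   # normal validation and fork choice apply from here on

checkpoint = "0xabc123"   # obtained from a friend, a block explorer, or the client vendor
chain = ["0xgenesis", "0x111", "0xabc123", "0xdef456"]
print(sync(chain, checkpoint)[-1])   # head of a chain consistent with the checkpoint
```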
This security assumption, the idea of “getting a block hash from a friend”, may strike many as unrigorous; Bitcoin developers often argue that if the solution to long-range attacks is some alternative deciding mechanism X, then the security of the blockchain ultimately depends on X, and so the algorithm is in reality no more secure than using X directly, the implication being that most choices of X, including our social-consensus approach, are insecure.
However, this logic ignores why consensus algorithms exist in the first place. Consensus is a social process, and human beings are fairly good at reaching consensus on their own without any algorithm to help them. Perhaps the best example is the Rai stones, with which the inhabitants of the island of Yap essentially established and maintained a blockchain, recording transfers of ownership of the stones (which, like bitcoin, had no intrinsic value of their own) as part of collective memory. Consensus algorithms are needed simply because people do not have unbounded computational power and prefer to rely on software agents to maintain consensus for them. Software agents can be very clever, in the sense of maintaining consensus over arbitrarily large states with extremely complex rulesets and with perfect precision, but they are also very ignorant, in the sense that they have very little social information. The challenge for a consensus algorithm is therefore to require as small an input of social information as possible.
Weak subjectivity is exactly the right answer. It handles the long-range problems of “proof of stake” by leaning on human-driven social information, while leaving the consensus algorithm to do what it does best: cutting the time to reach agreement from weeks down to twelve seconds, and allowing the use of very complex rulesets and fairly large states. The role left to human-driven consensus is maintaining agreement on block hashes over long spans of time, something people are quite good at. A hypothetical oppressive power strong enough to actually sow confusion about the true value of a block hash from a year ago would also be strong enough to overwhelm any “proof of work” algorithm.
It is worth noting that we do not need to fix N; in principle, an algorithm could allow users to keep their deposits locked for longer than N blocks, and users could then take advantage of those longer deposits to obtain a more fine-grained reading of their level of security. For example, if a user has been offline since a point T blocks in the past, and 23% of deposits have a term longer than T, then the user can adopt their own subjective scoring function that ignores signatures from deposits placed more recently, and thereby be secure against attacks commanding up to 11.5% of the total stake. A rising interest-rate curve can be used to make very long-term deposits more attractive than short-term ones, or, for simplicity, we can just rely on altruism.
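The arithmetic implied by the example above, under the assumption that an attacker must control a majority of whatever subset of deposits the returning node chooses to count:

```python
# If a node offline for T blocks only counts deposits whose term exceeds T,
# it stays safe as long as the attacker holds less than half of that subset.

def protection_threshold(fraction_of_deposits_longer_than_T):
    return fraction_of_deposits_longer_than_T / 2

print(protection_threshold(0.23))   # 0.115 -> secure against attacks of up to 11.5% of total stake
```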
Marginal Cost: Other Objections
Opponents of long-term deposits argue that they give deposit holders an incentive to keep their capital locked up, which is inefficient, exactly the same objection that is raised against “proof of work”. There are, however, four counterpoints.
First, marginal cost is not total cost, and the ratio of total cost to marginal cost is much lower for “proof of stake” than for “proof of work”. A user may well feel almost no pain from locking up 50% of their funds for a few months, some discomfort from locking up 70%, and would find locking up more than 85% intolerable without a large reward. In addition, every user has different preferences about how much, and for how long, they are willing to lock funds. Because these two factors combine, whatever the equilibrium interest rate turns out to be, the bulk of the funds will be deposited at a cost well below the marginal cost.
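An illustrative calculation of the point above, with made-up numbers: if users differ in the interest rate at which they are willing to lock up funds, the total cost actually borne at equilibrium (the area under the supply curve) is much less than the marginal rate multiplied by the amount locked:

```python
# Made-up supply curve: 100 users with reservation rates from 0.5% to 50% per year.
reservation_rates = [0.005 * i for i in range(1, 101)]
equilibrium_rate = 0.05   # suppose the protocol ends up paying 5% per year

locked = [r for r in reservation_rates if r <= equilibrium_rate]   # who deposits
total_cost = sum(locked)                       # what depositors actually give up
naive_cost = equilibrium_rate * len(locked)    # marginal cost x quantity

print(len(locked), round(total_cost, 3), round(naive_cost, 3))   # 10 users, 0.275 vs 0.5
```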

Second, locking up funds imposes a cost on the owner, but it also has an effect on the public interest: locked-up funds shrink the money supply available for transactions, which raises the value of the currency and so redistributes a little value to everyone else, creating a social benefit.
Third, security deposits are a very safe store of value, so (i) they substitute for holding money as a personal precaution against crises, and (ii) many users will be able to take out loans denominated in the same currency, collateralized by the deposit.
Finally, because “proof of stake” can actually confiscate deposits for misbehavior, and not merely withhold rewards, it can provide a level of security well above the level of rewards, whereas in “proof of work” the level of security can only equal the level of rewards. There is no way for a “proof of work” protocol to destroy the ASICs of misbehaving miners.
Fortunately, there is a way to test these claims: launch a series of “proof of stake” coins paying 1%, 2%, 3% and so on per year for participation, and see what percentage of coins ends up deposited in each case. Users will not act against their own interests, so the amount spent on consensus can serve as a measure of how much inefficiency the consensus algorithm introduces; if “proof of stake” achieves an adequate level of security at a lower level of reward than “proof of work”, we will know that “proof of stake” is the more efficient consensus mechanism, and the participation levels at the different reward levels will give an accurate picture of the ratio of total cost to marginal cost.
In sum, we now know that (i) “proof of stake” algorithms can be made secure, with weak subjectivity being both sufficient and necessary as the change to the security model that gets around the nothing-at-stake objection, and (ii) there are substantial economic reasons to believe that “proof of stake” is actually more cost-efficient than “proof of work”. Nor is “proof of stake” an unknown quantity: the past six months of formal description and research have laid out its strengths and weaknesses, and we now understand it at least as well as “proof of work”, which will probably always carry its own open questions about the centralization of mining.