Huobi Ventures | Distributed Computing: History and Market Outlook

An in-depth look at the next trillion-dollar track: combining zero-knowledge proofs and distributed computing

History of development

In the beginning, a computer could perform only one computing task at a time. With the advent of multi-core, multi-threaded CPUs, a single computer could run multiple computing tasks.

As traffic on large websites grew, the single-server model became hard to scale and drove up hardware costs. Service-oriented architecture (SOA) emerged, spreading the workload across multiple servers and built around three roles: the service registry, the service provider, and the service consumer.

But as business grows and servers are added, the point-to-point service connections in the SOA model become hard to maintain and scale. Borrowing from the bus concept in computer architecture, a bus model emerged to coordinate the individual service units: the Enterprise Service Bus (ESB) connects all systems through a hub-like architecture and acts as an intermediary that translates and coordinates the various formats and service protocols.

The REST model of communication, based on application programming interfaces (APIs), then stood out for its simplicity and greater composability. Individual services expose their interfaces to the outside world in the form of REST. When a client makes a request via a RESTful API, a representation of the resource's state is passed to the requester or endpoint. This representation is transmitted over HTTP in one of several formats: JSON (JavaScript Object Notation), HTML, XLT, Python, PHP, or plain text. JSON is the most popular because it is language-agnostic and can be read by both humans and machines.

Then came virtual machines, container technology, and three papers from Google:

2003, GFS: The Google File System

2004, MapReduce: Simplified Data Processing on Large Clusters

2006, Bigtable: A Distributed Storage System for Structured Data

These three papers, covering a distributed file system, distributed computing, and a distributed database respectively, raised the curtain on modern distributed systems. Hadoop replicated Google's papers in open source, followed by Spark, which is faster and easier to get started with, and Flink, which addresses real-time computing.

But these were all distributed systems, not peer-to-peer systems in the full sense of the word. In the Web3 space, the previous software architecture is completely overturned: consistency across distributed systems, protection against fraud attacks, protection against dust-transaction attacks, and a host of other issues pose challenges for decentralised computing frameworks.

The smart contract public chain represented by Ethereum can be abstractly understood as a decentralised computing framework, except that the EVM is a virtual machine with a limited instruction set that cannot handle the general-purpose computing Web2 needs, and on-chain resources are extremely expensive. Nevertheless, Ethereum is a breakthrough on the bottlenecks of peer-to-peer computing frameworks: peer-to-peer communication, network-wide consistency of computation results, data consistency, and so on.

Market prospects

Having covered the history of distributed computing, the reader will still have plenty of questions. The most likely ones are listed below.

Why are decentralised computing networks important in terms of business needs? How big is the overall market? What stage is it at now and how much room is there for the future? What opportunities are worth looking at? How do you go about making money?

Why is decentralized computing important?

Ethereum's original vision was to be the world's computer. In 2017, after the ICO explosion, it became clear that it was still mainly used for asset issuance. By 2020, DeFi Summer arrived and a large number of Dapps began to emerge; on-chain data exploded, and the EVM became increasingly powerless in the face of ever more complex business scenarios. Off-chain extensions are needed to implement what the EVM cannot, and roles such as oracles are, in a sense, already a form of decentralised computing.

Extracting business value from this data requires more sophisticated algorithms: the value of data is captured and generated through computation, and this is something most smart contract platforms are unable to provide.

Dapp development has moved past the 0-to-1 stage and now requires more powerful underlying infrastructure to support more complex business scenarios. Web3 as a whole has moved past toy applications and will have to face more complex logic and business scenarios in the future.

What is the overall market size?

How should we estimate the market size? By taking the size of the distributed computing business in the Web2 space and multiplying by Web3 market penetration? Or by adding up the valuations of the corresponding funded projects currently on the market?

We cannot simply transplant Web2's distributed computing market size onto Web3, for two reasons: 1. distributed computing in Web2 already meets most of the demand, while decentralised computing in Web3 serves differentiated demand; 2. the future scope of decentralised computing in Web3 will be global. A more rigorous estimate is therefore needed.

The potential size of this Web3 track can be estimated from the following starting points.

Benchmark against the valuations of comparable projects in the industry. According to CoinMarketCap, the circulating market cap of projects in the distributed computing segment is already $6.7 billion.

The revenue model comes from the design of the token economics. For example, a popular model today is for the token to be used to pay fees when transacting; fee income therefore indirectly reflects how prosperous the ecosystem is and how active trading is, and ultimately serves as a valuation criterion. There are of course other, more sophisticated token models, such as collateralised (staking) mining, serving in trading pairs, or acting as the anchor asset of an algorithmic stablecoin. Unlike valuation in the traditional stock market, the valuation of a Web3 project is therefore more like valuing a national currency: there is a diversity of scenarios in which the token can be adopted, so each project needs to be analysed on its own terms.

We can try to explore how a token model should be designed for a Web3 decentralised computing scenario. Suppose we set out to design a decentralised computing framework ourselves; what challenges would we encounter?

a) Because the network is fully decentralised, executing computational tasks in such a trustless environment requires incentives for resource providers, to guarantee both uptime and quality of service. The game-theoretic mechanism must keep the incentives reasonable while preventing attackers from mounting fraud attacks, Sybil attacks, and other attacks. Tokens are therefore needed first as a stake for participating in the PoS consensus network and guaranteeing consensus consistency across all nodes; resource contributors then need incentive mechanisms tied to the amount of work they contribute, and token incentives must grow in a positive cycle with business volume and network efficiency.

b) Compared with other layer-1s, the network itself generates a large number of transactions, and paying a fee per transaction, in the face of large numbers of dust transactions, is a market-proven token model.

c) If the token only has practical utility, its market capitalisation is hard to expand further. If it is used as an anchor asset for a portfolio of assets, several layers of nested asset portfolios greatly amplify the effect of financialisation.

Overall valuation = staking rate × gas consumption rate × (inverse of liquidity) × unit price. A toy calculation of this relation is sketched below.
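As a purely illustrative aid, here is a hedged sketch that plugs hypothetical numbers into the relation above; every input (staking rate, gas consumption rate, liquidity share, unit price, supply) is an assumption for exposition, not data from any real project.

```rust
// Toy illustration of the valuation relation above.
// All inputs are hypothetical placeholders, not real market data.
fn main() {
    let staking_rate = 0.45; // share of supply staked to secure the PoS network (assumed)
    let gas_consumption_rate = 0.02; // share of supply paid as fees per period (assumed)
    let liquidity = 0.30; // freely circulating share; lower liquidity means higher scarcity (assumed)
    let unit_price = 1.5; // price per token in USD (assumed)
    let circulating_supply = 1_000_000_000.0; // tokens (assumed)

    // Overall valuation = staking rate * gas consumption rate * (1 / liquidity) * unit price,
    // scaled by supply here to express it as a network-wide figure.
    let per_token_index = staking_rate * gas_consumption_rate * (1.0 / liquidity) * unit_price;
    println!("per-token valuation index: {per_token_index:.4}");
    println!("network-wide figure: {:.0}", per_token_index * circulating_supply);
}
```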

What stage is it at now and how much room is there for the future?

Between 2017 and now, many teams have attempted to move in the direction of decentralised computing, but all have failed; the reasons for failure are explained later. The exploration path has evolved from early projects in the spirit of volunteer "alien exploration" computing, to models that mimic traditional cloud computing, and then to exploration of Web3-native models.

The whole track is currently at the point where the 0-to-1 breakthrough has been verified at the academic level, and some large-scale projects have made considerable progress in engineering practice. For example, current zkRollup and zkEVM implementations have only just released products.

There is still a lot of room for the future, for the following reasons: 1. the efficiency of verifiable computation still needs to improve; 2. more instruction sets still need to be supplemented and enriched; 3. optimisation for genuinely different business scenarios; 4. business scenarios that could not be achieved with smart contracts in the past can now be achieved through decentralised computing.

Let's explain through a concrete case: a fully decentralised game. Most GameFi today requires a centralised service as the back end, whose role is to manage player state data and part of the business logic, while the application front end handles user interaction logic and the events it triggers and passes to the back end. There is currently no complete solution on the market that can support GameFi's business scenario. But verifiable decentralised computing protocols have emerged: with the back end replaced by a zkVM, a truly decentralised game can be realised. The front end sends user events to the zkVM, which executes the relevant business logic; the result is verified and the state is then recorded in a decentralised database. A minimal sketch of this flow appears below.
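The hedged sketch below illustrates this flow under simplified assumptions: all types (PlayerState, GameEvent) are hypothetical, and a plain hash stands in for the succinct proof a real zkVM would emit; no actual proving system is invoked.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Hypothetical player state kept in a decentralised database.
#[derive(Debug, Clone, Hash)]
struct PlayerState {
    position: (i32, i32),
    score: u64,
}

// Hypothetical events the front end emits from user interaction.
#[derive(Debug, Hash)]
enum GameEvent {
    Move { dx: i32, dy: i32 },
    Collect { points: u64 },
}

// Business logic the zkVM back end would execute off-chain.
fn apply_event(mut state: PlayerState, event: &GameEvent) -> PlayerState {
    match event {
        GameEvent::Move { dx, dy } => {
            state.position.0 += dx;
            state.position.1 += dy;
        }
        GameEvent::Collect { points } => state.score += points,
    }
    state
}

// Stand-in for the zkProof: a real zkVM would emit a succinct proof that
// `apply_event` was executed correctly; here we only hash the transition
// so that the data flow is visible.
fn mock_transition_commitment(old: &PlayerState, event: &GameEvent, new: &PlayerState) -> u64 {
    let mut h = DefaultHasher::new();
    (old, event, new).hash(&mut h);
    h.finish()
}

fn main() {
    let state = PlayerState { position: (0, 0), score: 0 };
    let events = vec![GameEvent::Move { dx: 1, dy: 2 }, GameEvent::Collect { points: 10 }];

    let mut current = state;
    for ev in &events {
        let next = apply_event(current.clone(), ev);
        let proof = mock_transition_commitment(&current, ev, &next);
        // A verifier would check `proof` before the decentralised database accepts `next`.
        println!("event {:?} -> state {:?}, commitment {:x}", ev, next, proof);
        current = next;
    }
}
```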

Of course this is only one proposed application scenario; Web2 has many business scenarios that require computing capability.

What opportunities are worth looking at? How do I go about making money?

| Application scenario \ VM type | General-purpose high-performance VM | AI VM |
| DeFi | Financial data processing / parallel execution and confirmation of highly concurrent financial transactions / high-performance aggregation… | Face recognition / credit assessment for more lending operations… |
| NFT | Gas-saving aggregated transactions / bulk registration tasks | AIGC / value estimation |
| Social applications | Migrating Web2 social applications / high-concurrency, high-parallelism business scenarios / basic social business logic previously not possible with smart contracts | Recommendation algorithms… |
| GameFi | Decentralised gaming / multiplayer online network communication processing… | Anti-cheating |
| Creator economy | Virtual machines that support executing the business logic of content-creation platforms for different file types | AIGC |

Attempts at decentralised distributed computing

Cloud service model

Ethereum currently has the following problems. Low overall throughput: a great deal of computing power is consumed, yet throughput is only equivalent to that of a smartphone.

Low incentive to verify. This problem is known as the Verifier's Dilemma: the node that wins the right to package a block is rewarded, while all other nodes must verify it without reward and therefore have little incentive to do so. Over time this can lead to unverified computation, which puts the security of on-chain data at risk.

Computation is limited (by the gas limit) and expensive.

Some teams have experimented with the cloud computing model widely adopted in Web2: the user pays a fee calculated from the time spent on computing resources. The fundamental reason for adopting such a model was that there was no way to verify that computational tasks were executed correctly; only measurable parameters such as time, or other controllable parameters, could be relied on.

Ultimately this model was not widely used because it failed to account for human behaviour: large amounts of resources were diverted to mining for maximum profit, so fewer resources were actually available to be used. This is the natural outcome of each player in the game seeking to maximise their own interest, and the end result was a complete departure from the original intention.

Challenger model

TrueBit, by contrast, uses a game-theoretic system to reach a globally optimal solution and guarantee that issued computational tasks are executed correctly.

The core elements of this computational framework are:

1. The roles: problem solver, challenger, and judge.

2. The problem solver must stake funds in order to receive computational tasks.

3. The challenger, acting as a bounty hunter, repeatedly verifies that the problem solver's results are consistent with its own local computation.

4. The challenger finds the most recent step at which the two computation states are still consistent; if there is a divergence point, it submits the Merkle tree hash of that divergence point.

5. Finally, the judge rules on whether the challenge succeeds (a sketch of this bisection-style dispute game follows the list).
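To make the search for a divergence point concrete, here is a self-contained, hedged sketch of the bisection idea under simplified assumptions: the "computation" is a toy deterministic step function, each party commits a plain hash of every intermediate state instead of a full Merkle tree, and the judge simply replays the single disputed step. This illustrates the mechanism only; it is not TrueBit's actual protocol.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// One deterministic step of the computational task (a toy LCG here).
fn step(state: u64) -> u64 {
    state.wrapping_mul(6364136223846793005).wrapping_add(1442695040888963407)
}

// Commitment to an intermediate state; a stand-in for the Merkle root
// of the full machine state that a real protocol would use.
fn commit(state: u64) -> u64 {
    let mut h = DefaultHasher::new();
    state.hash(&mut h);
    h.finish()
}

// Execute `n` steps, returning all intermediate states and their commitments.
// `cheat_at` lets us simulate a faulty problem solver.
fn run(start: u64, n: usize, cheat_at: Option<usize>) -> (Vec<u64>, Vec<u64>) {
    let (mut states, mut coms) = (vec![start], vec![commit(start)]);
    let mut s = start;
    for i in 0..n {
        s = step(s);
        if cheat_at == Some(i) {
            s ^= 0xdead_beef; // the solver diverges from the correct computation here
        }
        states.push(s);
        coms.push(commit(s));
    }
    (states, coms)
}

fn main() {
    let (start, n) = (42u64, 1024usize);
    let (solver_states, solver_coms) = run(start, n, Some(700)); // dishonest solver
    let (_, challenger_coms) = run(start, n, None);              // honest challenger

    // Bisection: find the first step where the two commitment sequences diverge.
    let (mut lo, mut hi) = (0usize, n); // invariant: agree at `lo`, differ at `hi`
    while hi - lo > 1 {
        let mid = (lo + hi) / 2;
        if solver_coms[mid] == challenger_coms[mid] { lo = mid } else { hi = mid }
    }

    // The solver reveals its state at the last agreed step; the judge checks the
    // reveal against the agreed commitment and replays the single disputed step.
    let revealed = solver_states[lo];
    let reveal_ok = commit(revealed) == challenger_coms[lo];
    let step_ok = commit(step(revealed)) == solver_coms[hi];
    println!("disputed step: {lo} -> {hi}");
    println!("challenge succeeds (solver cheated): {}", reveal_ok && !step_ok);
}
```

The key property is that however long the computation, the judge only ever re-executes one step on-chain; everything else is settled by comparing commitments.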

However, this model has its drawbacks.

1. The challenger can submit late, since it only needs to complete the submission task; the result is a lack of timeliness.

Using zero-knowledge proofs to verify computation

So how can we ensure both that the computation process can be verified and that verification remains timely?

One example is the zkEVM: within each block time, a verifiable zkProof must be submitted. The proof covers the bytecode generated from the computational business logic, whose execution is in turn expressed as circuit code. This ensures that the business logic was executed correctly, while the short, fixed block time guarantees the timeliness of verification. A minimal data-flow sketch with stand-in types follows.
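The sketch below is hedged and purely illustrative: the types are hypothetical and a simple checksum stands in for the zkProof; no real zkEVM, circuit compiler, or proving system is used. It only shows the per-block cadence of "execute, prove, verify".

```rust
use std::time::{Duration, Instant};

// Hypothetical stand-ins: none of these types come from a real zkEVM codebase.
struct Block { number: u64, txs: Vec<u64> }
struct ZkProof { block_number: u64, commitment: u64 }

// Mock "proving": in a real zkEVM the executed bytecode is arithmetised into a
// circuit and a succinct proof is produced; here we only fold the transactions
// into a number so the per-block flow is visible.
fn prove_block(block: &Block) -> ZkProof {
    let commitment = block
        .txs
        .iter()
        .fold(block.number, |acc, tx| acc.wrapping_mul(31).wrapping_add(*tx));
    ZkProof { block_number: block.number, commitment }
}

// Mock verification: recompute the same folding. A real verifier checks the
// succinct proof in time essentially independent of the number of transactions.
fn verify(block: &Block, proof: &ZkProof) -> bool {
    prove_block(block).commitment == proof.commitment && proof.block_number == block.number
}

fn main() {
    let block_time = Duration::from_secs(12); // illustrative target interval
    for number in 1..=3u64 {
        let block = Block { number, txs: vec![number, number + 1, number + 2] };
        let started = Instant::now();
        let proof = prove_block(&block);
        assert!(verify(&block, &proof));
        // Timeliness requirement: the proof must land within the fixed block time.
        println!(
            "block {number}: proof ok, generated in {:?} (budget {:?})",
            started.elapsed(),
            block_time
        );
    }
}
```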

Although the zkEVM targets only smart contract execution, it is still essentially a verifiable computation framework. The same logic could be extended to other generic VMs, such as WASM VMs or more general LLVM-based high-performance VMs. There are of course many engineering challenges in practice, but it gives us much more scope for exploration.

Assuming sufficiently high-performance zero-knowledge proof acceleration hardware and sufficiently optimised zero-knowledge proof algorithms, general-purpose computing scenarios can be fully developed: a large number of computational workloads from Web2 could be reproduced on zero-knowledge-proof general-purpose virtual machines. As mentioned above, this is a profitable business direction.

The combination of zero-knowledge proofs and distributed computing

Academic level

Let's look back at the historical evolution of zero-knowledge proof algorithms.

GMR85, the earliest scheme, originates from the paper by Goldwasser, Micali and Rackoff, The Knowledge Complexity of Interactive Proof Systems (GMR85), proposed in 1985 and published in 1989. The paper examines how much knowledge needs to be exchanged over K rounds of interaction in order to prove a statement correct.

Yao's Garbled Circuits (GC) [1989]: a well-known two-party secure computation protocol based on oblivious transfer, capable of evaluating any function. The central idea is to split the evaluation of a computation, expressed as a circuit, into a generation (garbling) phase and a solving (evaluation) phase. Each party is responsible for one phase, and in each phase the circuit is encrypted so that neither party can learn the other's information, yet they can still obtain the result from the circuit. A garbled circuit consists of an oblivious transfer protocol and a block cipher, and the complexity of the circuit grows at least linearly with the input size. After garbled circuits were published, Goldreich-Micali-Wigderson (GMW) [1991] extended them to multiple parties to resist malicious adversaries. A toy sketch of garbling a single gate follows.
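To make the generation/solving split concrete, here is a toy, hedged sketch of garbling a single AND gate. It uses Rust's non-cryptographic DefaultHasher as a stand-in for the block cipher, omits oblivious transfer entirely, and lets the evaluator know which row to decrypt (real constructions use tricks such as point-and-permute for that); it only shows the shape of the protocol.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Toy PRF standing in for the block cipher used to encrypt table rows.
// NOT cryptographically secure.
fn pad(left: u64, right: u64, row: usize) -> u64 {
    let mut h = DefaultHasher::new();
    (left, right, row).hash(&mut h);
    h.finish()
}

fn main() {
    // Generation phase (garbler): pick a secret label for each wire value.
    let (a0, a1) = (0x1111_aaaa_u64, 0x2222_bbbb_u64); // input wire A: labels for 0 and 1
    let (b0, b1) = (0x3333_cccc_u64, 0x4444_dddd_u64); // input wire B
    let (c0, c1) = (0x5555_eeee_u64, 0x6666_ffff_u64); // output wire C

    // Garbled truth table for AND: each row encrypts the correct output label
    // under the pair of input labels for that row.
    let table = [
        c0 ^ pad(a0, b0, 0), // 0 AND 0 = 0
        c0 ^ pad(a0, b1, 1), // 0 AND 1 = 0
        c0 ^ pad(a1, b0, 2), // 1 AND 0 = 0
        c1 ^ pad(a1, b1, 3), // 1 AND 1 = 1
    ];

    // Solving phase (evaluator): holds exactly one label per input wire
    // (in a real protocol its own label is obtained via oblivious transfer),
    // decrypts the matching row, and learns only the output label.
    let (my_a, my_b, row) = (a1, b1, 3); // evaluating inputs 1 AND 1
    let output_label = table[row] ^ pad(my_a, my_b, row);

    assert_eq!(output_label, c1); // corresponds to output bit 1
    println!("evaluator recovered the label for output bit 1");
}
```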

The sigma protocol is also known as (special) honest-verifier zero knowledge: the verifier is assumed to be honest. The classic example is the Schnorr identification protocol, although that is usually run in a non-interactive form; a sketch of the interactive version follows.
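Here is a hedged sketch of the interactive Schnorr identification protocol, the textbook sigma protocol, over a deliberately tiny group with fixed "random" values; it is insecure and for exposition only.

```rust
// Toy parameters: p = 2q + 1 with q = 11; g = 2 generates the order-q subgroup.
// Real deployments use groups of roughly 256-bit order; this is exposition only.
const P: u64 = 23;
const Q: u64 = 11;
const G: u64 = 2;

// Modular exponentiation by repeated squaring.
fn mod_pow(mut base: u64, mut exp: u64, modulus: u64) -> u64 {
    let mut acc = 1u64;
    base %= modulus;
    while exp > 0 {
        if exp & 1 == 1 {
            acc = acc * base % modulus;
        }
        base = base * base % modulus;
        exp >>= 1;
    }
    acc
}

fn main() {
    // Prover's secret and the corresponding public key.
    let x = 7u64;             // secret
    let y = mod_pow(G, x, P); // public: y = g^x mod p

    // Round 1 (commit): prover picks a nonce and sends t = g^r mod p.
    let r = 5u64;             // would be fresh randomness in practice
    let t = mod_pow(G, r, P);

    // Round 2 (challenge): the honest verifier picks a random challenge c.
    let c = 3u64;             // would be random in practice

    // Round 3 (response): prover sends s = r + c*x mod q.
    let s = (r + c * x) % Q;

    // Verification: g^s == t * y^c (mod p).
    let lhs = mod_pow(G, s, P);
    let rhs = t * mod_pow(y, c, P) % P;
    println!("proof accepted: {}", lhs == rhs);
}
```

The three moves (commitment t, challenge c, response s) are exactly the sigma-protocol shape; applying the Fiat-Shamir transform, i.e. deriving c by hashing t and the statement, yields the usual non-interactive variant.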

Pinocchio, 2013 (PGHR13): Pinocchio: Nearly Practical Verifiable Computation compresses proof size and verification time to the point of practical applicability; it is also the base protocol used by Zcash.

Groth16, 2016: On the Size of Pairing-based Non-interactive Arguments streamlines the size of proofs and improves verification efficiency; it is the most widely used ZK base algorithm today.

Bulletproofs (BBBPWM17), 2017: Bulletproofs: Short Proofs for Confidential Transactions and More presents the Bulletproofs algorithm, very short non-interactive zero-knowledge proofs that require no trusted setup. It was applied in Monero six months later, a very fast journey from theory to application.

zk-STARKs (BBHR18), 2018: Scalable, Transparent, and Post-Quantum Secure Computational Integrity proposes the ZK-STARK protocol, which requires no trusted setup, another interesting direction in the development of ZK. It led to the creation of StarkWare, one of the most important ZK projects.

Bulletproofs are characterised by

1) Short NIZK without the need for a trusted setup

2) Built on the Pedersen commitment

3) Support for proof aggregation

4) Prover time: O(N log N), about 30 seconds

5) Verifier time: O(N), about 1 second

6) Proof size: O(log N), about 1.3 KB

7) Security assumption: discrete log

The scenarios to which Bulletproofs applies are

1) range proofs (only approx. 600 bytes required)

2) inner product proofs

3) intermediary checks in MPC protocols

4) aggregated and distributed (with many private inputs) proofs

The main features of Halo2 are

1) Efficient combination of accumulation scheme and PLONKish arithmetization without the need for a trusted setup.

2) Based on the IPA commitment scheme.

3) A thriving developer ecosystem.

4) Prover time: O(N log N).

5) Verifier time: O(1), but higher than Groth16's.

6) Proof size: O(log N).

7) Security assumption: discrete log.

The scenarios for which Halo2 is suitable are

1) arbitrary verifiable computation

2) recursive proof composition

3) circuit-optimized hashing based on lookup-based Sinsemilla function

The scenarios for which Halo2 is not suitable are

1) High verification overhead on Ethereum, unless Halo2 is switched to the KZG version.

The main features of Plonky2 are

1) No need for trusted setup, combining FRI with PLONK.

2) Optimised for processors with SIMD, using the 64-bit Goldilocks field.

3) Prover time: O(N log N).

4) Verifier time: O(log N).

5) Proof size: O(log N).

6) Security assumption: collision-resistant hash functions.

Scenarios for which Plonky2 is suitable are.

1) Arbitrary verifiable computation.

2) recursive proof composition.

3) Circuit optimisation using custom gates.

Scenarios for which Plonky2 is not suitable are

1) constrained by its non-native arithmetic, it is not suitable for statements containing elliptic curve operations.

Currently Halo2 is the dominant algorithm used by zkVMs; it supports recursive proofs and can verify arbitrary types of computation, laying the foundation for zero-knowledge-proof virtual machines to handle general-purpose computing scenarios.

Engineering Practice Level

Now that zero-knowledge proofs are making rapid progress at the academic level, what is the current state of play in practical development?

Let's look at it from several angles.

Programming languages: there are now dedicated programming languages that lower the development threshold, so developers do not need in-depth knowledge of how circuit code is designed. There is also support for translating Solidity into circuit code. Developer friendliness keeps improving.

Virtual machines: there are currently several approaches to implementing a zkVM. The first uses a purpose-built programming language that is compiled by its own compiler into circuit code, from which the zkProof is finally generated. The second supports the Solidity programming language, compiling it via LLVM into a target bytecode that is then translated into circuit code and a zkProof. The third is truly EVM-equivalent. Is this the endgame for zkVMs? No. Whether it is the expansion into general-purpose computing scenarios beyond smart contract programming, or the way different zkVM designs complement and optimise their own underlying instruction sets, we are still at the 1-to-N stage. There is a long way to go, and a great deal of engineering work remains for optimisation and implementation. Each has taken an academic result through to an engineering implementation; whoever finally comes out on top will need not only significant performance gains but also the ability to attract a large number of developers into their ecosystem. Timing is a very important prerequisite: being first to market, attracting capital, and the spontaneous emergence of applications within the ecosystem are all elements of success.

Peripheral tooling and facilities: editor plug-ins, unit-testing plug-ins, debugging tools, and so on help developers build zero-knowledge proof applications more efficiently.

Infrastructure for zero-knowledge proof acceleration: since the FFT and MSM take up most of the computing time in a zero-knowledge proof, parallel devices such as GPUs and FPGAs can execute them in parallel and compress the time overhead. A sketch of parallelising an MSM-style computation follows.
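As a hedged illustration of why MSM parallelises so well, the sketch below splits a multi-exponentiation across threads; it uses modular exponentiation in a prime field as a stand-in for elliptic-curve point multiplication so that it runs without any curve library, and it omits the bucket/windowing tricks (e.g. Pippenger's algorithm) that real accelerators use.

```rust
use std::thread;

// Toy stand-in for a group operation: exponentiation in Z_p^* instead of
// elliptic-curve point multiplication, so the sketch needs no curve library.
const P: u64 = 0xffff_ffff_0000_0001; // the 64-bit "Goldilocks" prime, used here only as a modulus

fn mul_mod(a: u64, b: u64) -> u64 {
    ((a as u128 * b as u128) % P as u128) as u64
}

fn pow_mod(mut base: u64, mut exp: u64) -> u64 {
    let mut acc = 1u64;
    while exp > 0 {
        if exp & 1 == 1 { acc = mul_mod(acc, base); }
        base = mul_mod(base, base);
        exp >>= 1;
    }
    acc
}

// "MSM": combine every (base, scalar) pair into a single group element.
// Sequential version.
fn msm_sequential(bases: &[u64], scalars: &[u64]) -> u64 {
    bases.iter().zip(scalars).fold(1u64, |acc, (&g, &s)| mul_mod(acc, pow_mod(g, s)))
}

// Parallel version: each thread handles an independent chunk and the partial
// results are combined at the end; the terms are fully independent of each other.
fn msm_parallel(bases: &[u64], scalars: &[u64], threads: usize) -> u64 {
    let chunk = (bases.len() + threads - 1) / threads;
    let partials: Vec<u64> = thread::scope(|s| {
        let handles: Vec<_> = bases
            .chunks(chunk)
            .zip(scalars.chunks(chunk))
            .map(|(gb, sb)| s.spawn(move || msm_sequential(gb, sb)))
            .collect();
        handles.into_iter().map(|h| h.join().unwrap()).collect()
    });
    partials.into_iter().fold(1u64, mul_mod)
}

fn main() {
    let n = 1 << 12;
    let bases: Vec<u64> = (0..n).map(|i| 3 + 2 * i as u64).collect();
    let scalars: Vec<u64> = (0..n).map(|i| 5 + 7 * i as u64).collect();

    let seq = msm_sequential(&bases, &scalars);
    let par = msm_parallel(&bases, &scalars, 4);
    assert_eq!(seq, par);
    println!("sequential and 4-thread results agree: {seq:#x}");
}
```

GPUs and FPGAs exploit exactly this independence between terms, only with thousands of lanes instead of a handful of threads.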

Alternative programming-language implementations: for example, using a more efficient, better-performing language such as Rust.

Star projects are emerging: zkSync, StarkWare, and other quality projects have announced official product releases one after another, showing that the combination of zero-knowledge proofs and decentralised computing is no longer purely theoretical and is maturing in engineering practice.

Bottlenecks encountered and how to solve them

Low efficiency of zkProof generation

We talked earlier about the market size, current industry developments, and the actual state of the technology in this segment, but there are still challenges.

Let’s break down the entire zkProof generation process.

At the stage where the logic circuit is compiled and arithmetised into R1CS, roughly 80% of the operations are computations such as NTT and MSM. In addition, hashing is performed over the different layers of the logic circuit, and as more layers are added the hashing time grows linearly. The industry has now proposed the GKR algorithm, which reduces this overhead by a factor of about 200.

However, the NTT and MSM time overheads are still high. To reduce users' waiting time and improve the experience, acceleration is needed across the mathematical implementation, the software architecture, and GPU/FPGA/ASIC hardware.

The following benchmark shows compile, proof-generation, and verification times for several zkSNARK-family systems.

Benchmark results

Sudoku:

| Step | Miden | Plonk: 3 by 3 | Risc | Halo: 3 by 3 |
| compile | 1.52 ms (✅ 1.00x) | 99.92 ms (❌ 65.80x slower) | 1.86 ms (❌ 1.22x slower) | 329.15 ms (❌ 216.76x slower) |
| prove | 477.41 ms (✅ 1.00x) | 100.52 ms (🚀 4.75x faster) | 1.67 s (❌ 3.49x slower) | 116.74 ms (🚀 4.09x faster) |
| verify | 2.41 ms (✅ 1.00x) | 7.28 ms (❌ 3.02x slower) | 2.79 ms (❌ 1.15x slower) | 4.39 ms (❌ 1.82x slower) |
| total | 475.69 ms (✅ 1.00x) | 205.22 ms (🚀 2.32x faster) | 1.67 s (❌ 3.52x slower) | 450.98 ms (✅ 1.05x faster) |

Fibonacci:

| Step | Miden: iter-93 | Miden: fixed-92 | Miden: fixed-50 | Risc0: iter-93 | Risc0: iter-50 | Risc0: fixed-50 | Risc0: fixed-92 |
| compile | 64.89 us (✅ 1.00x) | 55.92 us (✅ 1.16x faster) | 45.01 us (✅ 1.44x faster) | 387.69 us (❌ 5.97x slower) | 388.34 us (❌ 5.98x slower) | 391.42 us (❌ 6.03x slower) | 390.26 us (❌ 6.01x slower) |
| prove | 472.51 ms (✅ 1.00x) | 231.76 ms (🚀 2.04x faster) | 233.25 ms (🚀 2.03x faster) | 417.66 ms (✅ 1.13x faster) | 413.46 ms (✅ 1.14x faster) | 410.38 ms (✅ 1.15x faster) | 412.02 ms (✅ 1.15x faster) |
| verify | 2.41 ms (✅ 1.00x) | 2.36 ms (✅ 1.02x faster) | 2.36 ms (✅ 1.02x faster) | 2.55 ms (✅ 1.06x slower) | 2.55 ms (✅ 1.06x slower) | 2.55 ms (✅ 1.06x slower) | 2.55 ms (✅ 1.06x slower) |
| total | 475.43 ms (✅ 1.00x) | 234.39 ms (🚀 2.03x faster) | 235.84 ms (🚀 2.02x faster) | 421.28 ms (✅ 1.13x faster) | 417.20 ms (✅ 1.14x faster) | 413.70 ms (✅ 1.15x faster) | 415.58 ms (✅ 1.14x faster) |

Fibonacci large:

| Step | Miden: iter-1000 | Risc0: iter-1000 |
| compile | 64.91 us (✅ 1.00x) | 387.43 us (❌ 5.97x slower) |
| prove | 4.07 s (✅ 1.00x) | 3.39 s (✅ 1.20x faster) |
| verify | 2.66 ms (✅ 1.00x) | 2.96 ms (✅ 1.11x slower) |
| total | 4.07 s (✅ 1.00x) | 3.40 s (✅ 1.20x faster) |

Blake (Risc0, library input "The quick brown fox jumps over the lazy dog"):

| Step | Risc0 |
| compile | 466.84 us (✅ 1.00x) |
| prove | 3.40 s (✅ 1.00x) |
| verify | 4.24 ms (✅ 1.00x) |
| total | 3.40 s (✅ 1.00x) |

Blake3 (Miden, library input "quick brown fox"):

| Step | Miden |
| compile | 7.38 ms (✅ 1.00x) |
| prove | 1.99 s (✅ 1.00x) |
| verify | 3.12 ms (✅ 1.00x) |
| total | 2.01 s (✅ 1.00x) |

Since we can see the flaws and challenges, there are also deep opportunities.

1. Designing chips that accelerate a specific zkSNARK algorithm, or zkSNARK algorithms in general. Compared with other cryptographic algorithms, zkSNARK proving generates more temporary files and requires more memory and GPU memory on the device. Chip start-ups face significant capital investment and are not guaranteed to succeed, but once successful, their technical barriers and IP protection become a moat. Chip start-ups must also have enough bargaining power to secure the lowest-cost supply channels, and must maintain overall quality control to achieve security.

2. GPU-accelerated SaaS services: using graphics cards for acceleration costs less than ASIC design and has a shorter development cycle. Over the long run, however, software-level innovation will eventually be displaced by hardware acceleration.

High hardware resource usage

Having been in contact with a number of zkRollup projects so far, it turns out that machines with large RAM and graphics cards with large memory are best suited to their software acceleration. In Filecoin mining, for example, large numbers of idle data-sealing machines have become the target devices of today's popular zkRollup projects. During Filecoin's C2 phase, the generated circuit code files must be produced and cached in memory; if the business logic is complex, the corresponding circuit code becomes very large and ends up as large temporary files. Hash operations over the circuit code in particular benefit from AMD CPU instruction acceleration, and because the CPU exchanges data with memory at high speed, efficiency is high. NVMe SSDs are also involved; both help accelerate zkSNARK operations. Having explored these possible acceleration approaches, we find that the resource requirements are still very high.

In the future, if zkSNARK applications are to be popularised at scale, optimisation at every level is imperative.

Gas consumption costs

We observed that every zkRollup layer 2 must pass its zkProof to layer 1 for verification and storage. Resources on the Ethereum chain are very expensive, so large-scale adoption would require paying a large amount of gas; ultimately the user bears this cost, which runs against the original intention of developing the technology.

Many ZKP projects therefore propose data-availability layers and use recursive proofs to compress the submitted zkProof, all with the aim of reducing gas costs.

Missing instructions for virtual machines

Most current zkVM platforms are oriented towards smart contract programming; a large amount of work on the underlying instruction set remains if they are to serve more general computing scenarios, for example support for libc, instructions for matrix operations, and other more complex computational instructions.

Conclusion

Because smart contract platforms are geared mainly towards asset-oriented programming, the combination of zero-knowledge proofs and decentralised computing is the opportunity if we want more real business scenarios to reach Web3. We accordingly expect zero-knowledge proofs to become a mainstream track rather than a niche technology.

By Jinbin Xie