Audit Competition Proposal for Pool Together V5 Smart Contracts by Hats Finance

Authors

Fav_Truffe (BD of Hats Finance, Twitter)

Summary

This is a proposal for Pool Together to conduct an audit competition for its V5 smart contracts.

Background and Motivation

Hats audit competitions are revolutionizing the world of Web3 security, offering a dynamic, cost-effective, and time-efficient solution for smart contract auditing. By transforming the traditional auditing approach, they ensure enhanced security through a community-driven process. With audit competitions, you retain full control over your budget, attract top auditing talent, and gain valuable insights from the Web3 community, all while preparing your project for a robust and secure launch.

Hats audit competitions work on a simple yet powerful model: rewarding results, not effort. You, as a project owner, allocate budgets according to the severity level of potential vulnerabilities. The budget is retained if no flaws are found. It's a model that ensures you pay only for value added to your project, giving you confidence in your investment.

These competitions typically draw over 300 skilled auditors who partake in a race against time, diligently hunting for bugs to ensure your project's safety. The model operates on a first-come, first-served basis, thus encouraging quick and quality submissions. Each successful auditor is rewarded for their findings, fostering a competitive environment that brings out the best in auditors.

In addition, the evaluation process is designed for efficiency. With rewards given to the first submitter, duplicate submissions are avoided. This not only streamlines the process but also saves valuable time.

The Hats audit competition mechanism is unique: no one else in the security ecosystem offers a better approach in terms of time and budget.

Hats Finance started offering the audit competition product to its partners in February, and the competitions run since then have demonstrated its efficiency. See the table below for reference:

Project | Audited by | Total Bounty ($) | Paid ($) | Findings
VMEX Finance | yAcademy | 67.5k | 45k | 2 high, 9 low, 2 gas saving
Raft Finance | Trail of Bits | 80k | 64k | 3 high, 4 medium, 11 low, 1 gas saving
Gravita Protocol | Solidity & Omniscia | 105k | 30k | 3 medium, 11 low
Lodestar Finance | Solidity | 30k | 14.1k | 18 medium, 2 gas saving
Fuji Finance | N/A | 30k | 30k | 3 high, 6 medium, 21 low, 2 gas saving
Hats Finance | Zokyo & Hexen & G0 Group | 40k | 31k | 1 high, 6 low

In short, we have created a no-brainer audit competition product for projects to run before launch: there is no upfront fee or additional cost, and payment is 100% by results. Imagine that ProjectX conducts an audit competition with a $50k bounty on Hats Protocol and allocates $30k for high severity, $18k for medium severity, $1k for low severity, and $1k for gas optimization. Let's explore the options (a short sketch of the accounting follows the list below):

  1. No valid submissions: ProjectX makes no payments and walks away with the full $50k
  2. Only low severity findings: ProjectX pays only the $1k allocated for low severity and withdraws the remaining $49k
  3. Only low and medium severity findings: ProjectX pays $19k and withdraws the remaining $31k.
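
For concreteness, here is a minimal sketch (in TypeScript, not the actual Hats Protocol vault API) of the pay-for-results accounting above; the severity tiers and amounts mirror the hypothetical ProjectX example.

```typescript
// Hypothetical illustration of the ProjectX budget split described above.
// This is not Hats Protocol code; it only mirrors the accounting in the example.

type Severity = "high" | "medium" | "low" | "gasOptimization";

const allocation: Record<Severity, number> = {
  high: 30_000,
  medium: 18_000,
  low: 1_000,
  gasOptimization: 1_000,
};

// Returns [amount paid out, amount the project withdraws back].
// A severity tier pays out only if at least one valid finding landed in it.
function settle(findings: Severity[]): [paid: number, withdrawn: number] {
  const total = Object.values(allocation).reduce((a, b) => a + b, 0);
  const paid = (Object.keys(allocation) as Severity[])
    .filter((sev) => findings.includes(sev))
    .reduce((sum, sev) => sum + allocation[sev], 0);
  return [paid, total - paid];
}

console.log(settle([]));                // [0, 50000]     -> option 1: no valid submissions
console.log(settle(["low"]));           // [1000, 49000]  -> option 2: only low severity
console.log(settle(["low", "medium"])); // [19000, 31000] -> option 3: low and medium
```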

Projects can also put a cap on each high severity finding. For example, if a project allocates $60k for high severity and caps each high severity finding at $15k, at least four high severity findings are needed to pay out the full $60k allocated for high severity.
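
A similarly minimal sketch of the per-finding cap, using the hypothetical figures above ($60k high severity allocation, $15k cap per finding); again, this is illustrative arithmetic, not real Hats contract logic.

```typescript
// Hypothetical cap illustration: the full high-severity allocation is only
// paid out once enough capped findings accumulate to reach it.
function highSeverityPayout(
  findingsCount: number,
  allocation = 60_000,
  capPerFinding = 15_000,
): number {
  return Math.min(findingsCount * capPerFinding, allocation);
}

console.log(highSeverityPayout(2)); // 30000 -> the other 30000 stays with the project
console.log(highSeverityPayout(4)); // 60000 -> at least 4 findings exhaust the allocation
```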

Additional Advantages of the Audit Competition on Hats Protocol

  • 100% payment by results
  • Hats Finance charges no B2B fee (its 10% fee comes out of the payout, so there is no additional cost for Pool Together)
  • Pool Together can easily set up an audit competition with 7 days' notice
  • Pool Together receives vulnerability submissions in real time and can start fixing issues during the competition
  • Pool Together can attract the wider Web3 security community to get involved with Pool Together V5 through the audit competition
  • Pool Together will align with the essence of Web3 by deploying an on-chain audit competition

Proposal

  • For: Conduct a 10-14 day audit competition on the Hats protocol
  • Against: Do nothing

Hey, thanks for your proposal @Fav_Truffe :slight_smile: We discussed your proposal in the last community call! The timing is short until the v5 public launch on Oct 19. But hypothetically, I'm wondering if this could be a grant proposal. What do you think @Lonser @gabor @McOso?


Hello everyone,

I'm happy to see this proposal in motion. I'm Ofir, the Head of Growth at Hats. We had an insightful discussion with Brenerd, which gave us a clear understanding of the forum's mechanism and the grant system.

Considering your launch timeline, we can plan an audit competition from 10/10 to 17/10. This aligns with the deployment of the contracts, ensuring a secure launch. If we've understood correctly, there have been code changes post-audit. We believe our unique 'pay-for-results' approach, which attracts numerous security experts, will be beneficial.

For a successful execution, weā€™ll need the dev teamā€™s assistance to:

  • Set up the vault and the committee.
  • Classify the submissions.
  • Handle the payouts from the vault.

It's crucial to highlight:

  • If there are no valid submissions, Hats won't charge a service fee. Our fee is 20% of the payout, not an additional charge on the budget (a short sketch of this follows the list).
  • Any remaining budget can be withdrawn from the vault after a 7-day period.
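
For illustration only, a minimal sketch of the fee model as stated in this post: the 20% fee comes out of the payout rather than being added on top of the budget. The exact split between Hats and the auditors is an assumption for this sketch, not the precise Hats vault accounting.

```typescript
// Assumed fee breakdown: the project's cost is just the payout; Hats' 20%
// is carved out of that payout (the split with auditors is an assumption here).
function feeBreakdown(payout: number, feeRate = 0.2) {
  const hatsFee = payout * feeRate;
  const toAuditors = payout - hatsFee;
  return { totalCostToProject: payout, hatsFee, toAuditors };
}

console.log(feeBreakdown(19_000)); // { totalCostToProject: 19000, hatsFee: 3800, toAuditors: 15200 }
console.log(feeBreakdown(0));      // no valid submissions -> no payout, no fee
```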

Our goal is to support a secure deployment and ensure the final code undergoes thorough review.

Would love to hear your thoughts, @brendan, @trmid

Ofir


Hi @sombrero! Nice to meet you.

I do think Hats is taking an interesting approach to auditing. The idea of only paying for actual bugs or exploits is appealing. It's like security bounties that are time-limited.

I plan to help PoolTogether set up a bounty program with Immunefi, though we've been so overloaded with other work that it's been delayed.

We've conducted two audits so far, with C4 and 0xMacro, and I've felt that was sufficient. More audits are always good, although they provide diminishing returns and can be expensive. C4 and Macro both found very similar issues, so I feel confident that we have good coverage. If they had found completely different issues then we'd need to continue auditing!

I won't proceed with Hats for this release, though it might be worth a look in the future.


Hey! Thanks @Brendan for your reply. I believe an audit of the final commit before deployment would have been worthwhile, even though I appreciate that timing is tight before the launch date, all the more so because the protocol spends no money if there are no new findings.
It may also bring the code to the attention of new hackers before deployment, even though most auditors already participate in competitions from various organizers such as Code4rena or hats.finance.
Finally, I still wonder what the grant team thinks of this proposal.

Grants would be fine with considering funding for a proposal like this, although our focus is on small Grants up to $5k, and anything higher would need to go through a Governance Vote.
I also agree with Brendan that the audits so far seem relatively sufficient already, and I think any audit by Hats should focus only on high severity findings and not include low and medium ones, which we probably won't be able to fix in full before deployment anyway. There would also still need to be a developer overseeing the audit, as far as I know. I also don't know how the judging process decides which category a finding falls into; that is something to clarify before funding, in my opinion.

Thanks for sharing your thoughts.

Having done two audits is a great approach, better than most projects take.
I'd like to hop on a call to explain how our system works and highlight how we've decentralized the process and made it accessible to everyone on-chain. Our philosophy aligns with yours: every project should undergo as many audits as feasible. However, we recognize the budgetary constraints many face. That's where our 'pay-for-results' model comes into play. It offers the thoroughness of traditional audits, like C4, but at a fraction of the cost and on-chain.

Further, to emphasize the benefits of our platform, I've attached a table below. It underscores how collaborating with us is a win-win situation, even in scenarios where auditors might not identify any vulnerabilities.

Let me know if we can connect for a demo,

Thanks

Thanks for your support!

On the topic of severities, while it's entirely possible to focus solely on high-severity competitions, our recommendation would be to include medium-severity ones as well. We can discuss setting a reward cap for both high and medium-severity findings to ensure budgetary control.

The evaluation is conducted by a committee that you establish. We believe that no one understands the code better than its creators. They will be responsible for categorizing the issues and explaining their validity.