Contributor Onboarding & Benchmarking

TasQ verifies, benchmarks, and performance-ranks every contributor joining the network before assigning any live workloads. This process keeps output quality consistent across the distributed compute pool.

Onboarding Process

  1. Account Creation – Contributors register via the TasQ dApp using either Web3 authentication (MetaMask, WalletConnect) or traditional Web2 login.

  2. Identity & Compliance – Optional KYC verification for contributors seeking access to higher-tier payouts or enterprise workloads.

  3. System Compatibility Check – TasQ runs automated scripts to detect hardware specifications, OS compatibility, and security configurations; a sketch of such a check follows this list.

  4. Ledger Integration – The contributor’s node is linked to the TasQ Zero-Knowledge Ledger for secure, privacy-preserving task allocation.
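
What the automated check in step 3 could look like in practice is sketched below. This is a minimal, hypothetical probe: the MIN_RAM_GB threshold, the SUPPORTED_OS set, and the reported fields are illustrative assumptions, not TasQ's actual onboarding script.

```python
# Hypothetical sketch of a node compatibility probe. MIN_RAM_GB,
# SUPPORTED_OS, and the reported fields are illustrative assumptions.
import os
import platform

MIN_RAM_GB = 8
SUPPORTED_OS = {"Linux", "Darwin", "Windows"}

def total_ram_gb() -> float:
    """Best-effort physical RAM detection (POSIX sysconf; 0.0 if unavailable)."""
    try:
        pages = os.sysconf("SC_PHYS_PAGES")
        page_size = os.sysconf("SC_PAGE_SIZE")
        return pages * page_size / 1024**3
    except (ValueError, OSError, AttributeError):
        return 0.0

def compatibility_report() -> dict:
    """Collect the hardware and OS facts a scheduler would need."""
    return {
        "os": platform.system(),
        "os_version": platform.release(),
        "arch": platform.machine(),
        "cpu_count": os.cpu_count() or 1,
        "ram_gb": round(total_ram_gb(), 1),
    }

def is_compatible(report: dict) -> bool:
    return report["os"] in SUPPORTED_OS and report["ram_gb"] >= MIN_RAM_GB

if __name__ == "__main__":
    report = compatibility_report()
    print(report, "->", "eligible" if is_compatible(report) else "rejected")
```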

Benchmarking Methodology

  • Compute Throughput Test – Measures floating-point throughput (FLOPS) using the Linpack benchmark; a harness sketch follows this list.

  • Task Latency Simulation – Evaluates round-trip execution speed with synthetic workloads.

  • Parallelism Capability – Stress-tests multi-threaded workloads using OpenMP and MPI-based test scripts.

  • Data I/O Test – Assesses read/write performance for tasks requiring high data movement.
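
A minimal sketch of how a node-side harness might run the first two tests. A real deployment would invoke Linpack and TasQ's own synthetic workloads; here a NumPy matrix multiply stands in for Linpack and a trivial callable stands in for a synthetic task, with matrix size and trial counts chosen arbitrarily.

```python
# Illustrative benchmark harness: a NumPy matmul as a stand-in for
# Linpack, and a timed callable as a stand-in for a synthetic workload.
# Sizes and trial counts are arbitrary assumptions.
import time
import numpy as np

def throughput_gflops(n: int = 2048, trials: int = 3) -> float:
    """Approximate sustained GFLOPS via dense matmul (~2*n^3 flops each)."""
    a = np.random.rand(n, n)
    b = np.random.rand(n, n)
    best = float("inf")
    for _ in range(trials):
        start = time.perf_counter()
        a @ b
        best = min(best, time.perf_counter() - start)
    return (2 * n**3) / best / 1e9

def task_latency_ms(task, trials: int = 100) -> float:
    """Median round-trip time for a synthetic task, in milliseconds."""
    samples = []
    for _ in range(trials):
        start = time.perf_counter()
        task()
        samples.append((time.perf_counter() - start) * 1000)
    samples.sort()
    return samples[len(samples) // 2]

if __name__ == "__main__":
    print(f"throughput: {throughput_gflops():.1f} GFLOPS")
    print(f"latency:    {task_latency_ms(lambda: sum(range(10_000))):.3f} ms")
```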

Result Classification

Performance scores are stored on-chain and categorized into Node Tiers (a classification sketch follows the list):

  • Tier 1: Entry-level compute (light workloads)

  • Tier 2: Mid-performance nodes (balanced workloads)

  • Tier 3: High-performance nodes (AI, data-intensive workloads)
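
One plausible way the tier mapping could work is sketched below. The composite weighting, score scale, and tier cut-offs are illustrative assumptions; the actual thresholds are defined on-chain by the protocol.

```python
# Hypothetical mapping from a composite benchmark score to a Node Tier.
# Weights, score scale, and cut-offs are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class BenchmarkResult:
    gflops: float
    latency_ms: float
    io_mb_s: float

def composite_score(r: BenchmarkResult) -> float:
    """Weighted blend of the benchmark dimensions (weights assumed)."""
    return 0.5 * r.gflops + 0.3 * (1000 / max(r.latency_ms, 1e-6)) + 0.2 * r.io_mb_s

def node_tier(score: float) -> int:
    """Tier 1 = light workloads, Tier 2 = balanced, Tier 3 = AI/data-intensive."""
    if score >= 500:
        return 3
    if score >= 100:
        return 2
    return 1

result = BenchmarkResult(gflops=800, latency_ms=4, io_mb_s=900)
print(node_tier(composite_score(result)))  # -> 3
```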

Continuous Evaluation

Nodes are re-benchmarked periodically to detect performance degradation, hardware upgrades, or stability issues. A node that falls below the minimum benchmark thresholds is temporarily suspended until the issue is resolved.
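
A minimal sketch of the suspension rule this implies, comparing a fresh score against the node's stored baseline. The 20% degradation tolerance and the NodeStatus states are illustrative assumptions, not TasQ's actual policy.

```python
# Sketch of a re-benchmarking policy: suspend a node whose fresh score
# falls too far below its recorded baseline. Tolerance and status names
# are illustrative assumptions.
from enum import Enum

class NodeStatus(Enum):
    ACTIVE = "active"
    SUSPENDED = "suspended"

DEGRADATION_TOLERANCE = 0.20  # assumed: allow up to 20% below baseline

def evaluate(baseline_score: float, fresh_score: float) -> NodeStatus:
    """Suspend the node if the fresh benchmark falls too far below baseline."""
    if fresh_score < baseline_score * (1 - DEGRADATION_TOLERANCE):
        return NodeStatus.SUSPENDED
    return NodeStatus.ACTIVE

assert evaluate(baseline_score=100.0, fresh_score=85.0) is NodeStatus.ACTIVE
assert evaluate(baseline_score=100.0, fresh_score=70.0) is NodeStatus.SUSPENDED
```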
