Client Performance Benchmarking

Istora Mandiri

This article is a placeholder and is subject to change as research continues.

Abstract

ECIP-1120's block elasticity feature requires empirical validation to ensure ETC clients can safely process larger blocks. This research benchmarks Core-Geth and Besu performance across various elasticity multipliers (2x through 32x), measuring block processing time, memory consumption, and network propagation characteristics. Results will inform the final ELASTICITY_MULTIPLIER and MAX_GAS_LIMIT parameters.

Research Objectives

  1. What is the maximum block gas limit each client can process within acceptable time bounds?
  2. How does block processing time scale with gas used for each client?
  3. What are the memory requirements for processing max-size blocks?
  4. How do different hardware configurations affect processing capability?
  5. What is the uncle rate impact at various gas limits?

Background

ETC Client Landscape

Ethereum Classic is supported by two actively maintained clients:

Client     Language  Notes
Core-Geth  Go        Fork of go-ethereum, primary ETC client
Besu       Java      Hyperledger project, multi-network support

Both clients must safely support the chosen elasticity multiplier.

Current Network Parameters

  • Current Gas Limit: ~8,000,000 (miner-adjustable)
  • Block Time: ~13 seconds average
  • Typical Block Utilization: Variable, often < 50%

Processing Constraints

Block processing involves:

  1. Transaction validation: Signature verification, nonce checking
  2. EVM execution: State transitions for each transaction
  3. State updates: Merkle trie modifications
  4. Block finalization: Header assembly, uncle processing

Each step scales differently with block size.
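The differing scaling behavior can be sketched as a simple additive cost model. The coefficients below are illustrative placeholders, not measured values; the point is that validation scales with transaction count, EVM execution with gas used, and trie updates with state writes.

```typescript
// Hypothetical cost model for the four processing steps above.
// All coefficients are placeholder assumptions for planning purposes.
interface CostModel {
  sigVerifyPerTx: number;  // ms per transaction (signature + nonce check)
  evmPerMGas: number;      // ms per million gas executed
  triePerWrite: number;    // ms per state write (trie-depth cost folded in)
  finalizeFixed: number;   // ms of fixed per-block overhead
}

function estimateImportMs(
  model: CostModel,
  txCount: number,
  gasUsed: number,
  stateWrites: number,
): number {
  const validation = model.sigVerifyPerTx * txCount;
  const execution = model.evmPerMGas * (gasUsed / 1_000_000);
  // Trie updates scale roughly O(log n) in trie size per write; this
  // sketch folds that into a per-write constant.
  const stateUpdate = model.triePerWrite * stateWrites;
  return validation + execution + stateUpdate + model.finalizeFixed;
}
```

A model like this gives rough expectations to compare against Phase 2/3 measurements; large deviations would flag a performance cliff worth profiling.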

Methodology

Approach

  1. Synthetic Workload Generation: Create blocks with controlled gas usage patterns
  2. Controlled Environment Testing: Measure performance on standardized hardware
  3. Network Propagation Testing: Measure block propagation in test networks
  4. Historical Replay: Re-execute historical blocks to validate measurements

Test Hardware Configurations

Testing will cover hardware representative of ETC node operators:

Tier       CPU                RAM    Storage     Notes
Entry      4 cores @ 2.5GHz   8 GB   500GB SSD   Minimum viable
Mid-range  8 cores @ 3.0GHz   16 GB  1TB NVMe    Typical operator
High-end   16 cores @ 3.5GHz  32 GB  2TB NVMe    Infrastructure provider

Workload Types

  1. Simple transfers: Maximum transaction count (21,000 gas each)
  2. Contract calls: Mixed computation workloads
  3. State-heavy: Maximum state reads/writes
  4. Worst-case: Adversarial workloads designed to maximize processing time
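For the simple-transfer workload, the transaction count is bounded directly by the block gas limit divided by the 21,000-gas cost of a plain transfer:

```typescript
// Upper bound on plain transfers per block, given a gas limit.
// A plain value transfer costs exactly 21,000 gas.
function maxSimpleTransfers(gasLimit: number): number {
  const TRANSFER_GAS = 21_000;
  return Math.floor(gasLimit / TRANSFER_GAS);
}
// e.g. an 8M-gas block holds at most 380 plain transfers;
// a 256M-gas block holds at most 12,190.
```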

Metrics Collected

  • Block import time (ms)
  • Peak memory usage (MB)
  • CPU utilization (%)
  • Disk I/O (MB/s)
  • Network bytes transmitted
  • Time to first peer receipt
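A fixed record shape for these metrics keeps results comparable across clients and phases. The field names below are assumptions of this sketch (they mirror the metric list, not any client API):

```typescript
// One hypothetical record per imported block; field names are this
// plan's own, not a Core-Geth or Besu API.
interface BenchmarkSample {
  client: "core-geth" | "besu";
  gasUsed: number;
  importTimeMs: number;
  peakMemoryMb: number;
  cpuUtilPct: number;
  diskIoMbps: number;
  networkBytes: number;
  firstPeerReceiptMs: number;
}

// Flatten a sample to one CSV row for the logging infrastructure.
function toCsvRow(s: BenchmarkSample): string {
  return [
    s.client, s.gasUsed, s.importTimeMs, s.peakMemoryMb,
    s.cpuUtilPct, s.diskIoMbps, s.networkBytes, s.firstPeerReceiptMs,
  ].join(",");
}
```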

Research Plan

Phase 1: Environment Setup

  • Configure test hardware at each tier
  • Set up Core-Geth test nodes with various configurations
  • Set up Besu test nodes with various configurations
  • Develop synthetic block generation tooling
  • Establish measurement and logging infrastructure

Phase 2: Core-Geth Benchmarking

  • Measure baseline performance at current 8M gas limit
  • Test processing time at 16M, 32M, 64M, 128M, 256M gas
  • Profile memory usage for each gas level
  • Identify performance cliffs and bottlenecks
  • Test with worst-case adversarial workloads

Phase 3: Besu Benchmarking

  • Measure baseline performance at current 8M gas limit
  • Test processing time at 16M, 32M, 64M, 128M, 256M gas
  • Profile memory usage for each gas level
  • Identify performance cliffs and bottlenecks
  • Test with worst-case adversarial workloads

Phase 4: Network Propagation

  • Set up multi-node test network (20+ nodes)
  • Measure propagation time for blocks at each gas level
  • Test under various network conditions (latency, packet loss)
  • Model expected uncle rate impact
  • Validate findings against historical ETC data

Phase 5: Analysis & Reporting

  • Compile benchmark results into standardized format
  • Generate performance comparison charts
  • Identify safe operating parameters for each client
  • Document hardware requirements for each elasticity option
  • Prepare recommendations for Elasticity Multiplier Selection

Expected Outcomes

  1. Performance Profiles: Processing time curves for each client × gas level × hardware tier
  2. Memory Requirements: Peak RAM needed for each configuration
  3. Safe Limits Report: Maximum gas limit recommendation for each client
  4. Hardware Guidelines: Updated node operator requirements
  5. Uncle Rate Model: Expected uncle rate as function of gas limit
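As a starting point for the uncle rate model, one common first-order approximation assumes Poisson block arrivals: a competing block is found while ours propagates with probability roughly 1 − e^(−tProp/tBlock). This is a planning assumption only, to be replaced by the empirically validated Phase 4 model:

```typescript
// First-order uncle-rate approximation under a Poisson arrival
// assumption: u ≈ 1 - exp(-tProp / tBlock), where tProp is mean block
// propagation time and tBlock the mean block interval (~13 s on ETC).
// A simplifying sketch, not the validated model this research produces.
function uncleRate(tPropSeconds: number, tBlockSeconds: number = 13): number {
  return 1 - Math.exp(-tPropSeconds / tBlockSeconds);
}
```

Under this model, 1 second of propagation delay implies roughly a 7% uncle rate, which illustrates why propagation time must be measured carefully at each gas level.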

Success Criteria

  • Block processing time < 2 seconds on mid-range hardware at recommended limit
  • Memory usage < 8GB on mid-range hardware at recommended limit
  • No client crashes or state corruption during stress testing
  • Both clients can process max-size blocks reliably
  • Network propagation allows 95% of nodes to receive a new block before the next block is produced (~13 seconds later on average)
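The numeric criteria can be encoded as a single pass/fail check for the analysis phase. The input shape is hypothetical; the thresholds (2 s import, 8 GB memory on mid-range hardware, 95% propagation coverage) come from the list above:

```typescript
// Hypothetical aggregate result for one candidate gas limit.
interface CriteriaInput {
  midRangeImportMs: number;       // block import time on mid-range tier
  midRangeMemoryGb: number;       // peak memory on mid-range tier
  propagationCoveragePct: number; // % of nodes receiving block in time
}

// Encodes the numeric success criteria listed above.
function meetsCriteria(r: CriteriaInput): boolean {
  return r.midRangeImportMs < 2000
    && r.midRangeMemoryGb < 8
    && r.propagationCoveragePct >= 95;
}
```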

Dependencies

  • Core-Geth development team - Access to profiling builds
  • Besu development team - Access to profiling builds
  • Test infrastructure - Multi-node network setup
  • Elasticity Multiplier Selection - Depends on this research

Current Status

Status: TODO

Progress Log

  • 2025-11-28: Initial research plan drafted
  • Pending: Begin Phase 1 environment setup

Appendix: Benchmark Methodology Details

Block Generation

Synthetic blocks will be generated using controlled transaction mixes:

```typescript
interface BlockProfile {
  targetGas: bigint;
  txMix: {
    simpleTransfers: number;  // % of gas as 21k transfers
    contractCalls: number;    // % of gas as contract interactions
    stateHeavy: number;       // % of gas as SLOAD/SSTORE-heavy ops
  };
}
```
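A generated profile should be sanity-checked before use: the three mix percentages must account for the whole block's gas. The helper below is a hypothetical sketch (the `BlockProfile` shape is re-declared so it stands alone):

```typescript
// Re-declared here so this sketch is self-contained.
interface BlockProfile {
  targetGas: bigint;
  txMix: { simpleTransfers: number; contractCalls: number; stateHeavy: number };
}

// Hypothetical validation helper: mix must sum to 100% and target a
// positive amount of gas.
function isValidProfile(p: BlockProfile): boolean {
  const { simpleTransfers, contractCalls, stateHeavy } = p.txMix;
  return simpleTransfers + contractCalls + stateHeavy === 100
    && p.targetGas > 0n;
}

// Example: a state-heavy 64M-gas profile for Phase 2/3 testing.
const stateHeavy64M: BlockProfile = {
  targetGas: 64_000_000n,
  txMix: { simpleTransfers: 10, contractCalls: 20, stateHeavy: 70 },
};
```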

Measurement Protocol

Each measurement will:

  1. Start with clean state (freshly synced node)
  2. Import 100 blocks to warm caches
  3. Measure next 1000 blocks
  4. Report: min, max, mean, p50, p95, p99
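The reporting step above can be sketched as a summary function over the 1,000 measured import times, using nearest-rank percentiles (one reasonable convention among several):

```typescript
// Summarize a series of block import times (ms) into the statistics
// listed in step 4, using nearest-rank percentiles.
function summarize(samplesMs: number[]) {
  const sorted = [...samplesMs].sort((a, b) => a - b);
  const pct = (p: number): number =>
    sorted[Math.min(sorted.length - 1, Math.ceil((p / 100) * sorted.length) - 1)];
  const mean = sorted.reduce((sum, x) => sum + x, 0) / sorted.length;
  return {
    min: sorted[0],
    max: sorted[sorted.length - 1],
    mean,
    p50: pct(50),
    p95: pct(95),
    p99: pct(99),
  };
}
```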
