Abstract
ECIP-1120's block elasticity feature requires empirical validation to ensure ETC clients can safely process larger blocks. This research benchmarks Core-Geth and Besu performance across various elasticity multipliers (2x through 32x), measuring block processing time, memory consumption, and network propagation characteristics. Results will inform the final ELASTICITY_MULTIPLIER and MAX_GAS_LIMIT parameters.
Research Objectives
- What is the maximum block gas limit each client can process within acceptable time bounds?
- How does block processing time scale with gas used for each client?
- What are the memory requirements for processing max-size blocks?
- How do different hardware configurations affect processing capability?
- What is the uncle rate impact at various gas limits?
Background
ETC Client Landscape
Ethereum Classic is supported by two actively maintained clients:
| Client | Language | Notes |
|---|---|---|
| Core-Geth | Go | Fork of go-ethereum, primary ETC client |
| Besu | Java | Hyperledger project, multi-network support |
Both clients must safely support the chosen elasticity multiplier.
Current Network Parameters
- Current Gas Limit: ~8,000,000 (miner-adjustable)
- Block Time: ~13 seconds average
- Typical Block Utilization: Variable, often < 50%
Processing Constraints
Block processing involves:
- Transaction validation: Signature verification, nonce checking
- EVM execution: State transitions for each transaction
- State updates: Merkle trie modifications
- Block finalization: Header assembly, uncle processing
Each step scales differently with block size.
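To attribute import latency to these stages, the measurement harness can record per-stage timings alongside each block import. A minimal sketch of such a record, assuming hypothetical field names rather than an existing Core-Geth or Besu API:

```typescript
// Hypothetical per-stage timing record for a single block import.
// Stage names mirror the pipeline above; this is a shape for the
// collected measurements, not a client API.
interface StageTimings {
  blockNumber: bigint;
  gasUsed: bigint;
  txValidationMs: number;  // signature verification, nonce checks
  evmExecutionMs: number;  // state transitions for each transaction
  stateUpdateMs: number;   // Merkle trie modifications and commit
  finalizationMs: number;  // header assembly, uncle processing
}

// Total import time is the sum of the stage timings.
function totalImportMs(t: StageTimings): number {
  return t.txValidationMs + t.evmExecutionMs + t.stateUpdateMs + t.finalizationMs;
}
```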
Methodology
Approach
- Synthetic Workload Generation: Create blocks with controlled gas usage patterns
- Controlled Environment Testing: Measure performance on standardized hardware
- Network Propagation Testing: Measure block propagation in test networks
- Historical Replay: Re-execute historical blocks to validate measurements
Test Hardware Configurations
Testing will cover hardware representative of ETC node operators:
| Tier | CPU | RAM | Storage | Notes |
|---|---|---|---|---|
| Entry | 4 cores @ 2.5GHz | 8 GB | 500GB SSD | Minimum viable |
| Mid-range | 8 cores @ 3.0GHz | 16 GB | 1TB NVMe | Typical operator |
| High-end | 16 cores @ 3.5GHz | 32 GB | 2TB NVMe | Infrastructure provider |
Workload Types
- Simple transfers: Maximum transaction count (21,000 gas each)
- Contract calls: Mixed computation workloads
- State-heavy: Maximum state reads/writes
- Worst-case: Adversarial workloads designed to maximize processing time
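Each workload can be expressed as a transaction-mix profile (the `BlockProfile` shape defined in the appendix). A minimal sketch with illustrative percentages, not final test parameters:

```typescript
// Illustrative transaction-mix profiles for the four workload types.
// Percentages are placeholders for discussion, not final test parameters.
interface TxMix {
  simpleTransfers: number; // % of block gas spent on 21k-gas transfers
  contractCalls: number;   // % of block gas spent on contract interactions
  stateHeavy: number;      // % of block gas spent on SLOAD/SSTORE-heavy calls
}

const workloadProfiles: Record<string, TxMix> = {
  simpleTransfers: { simpleTransfers: 100, contractCalls: 0,  stateHeavy: 0 },
  contractCalls:   { simpleTransfers: 20,  contractCalls: 80, stateHeavy: 0 },
  stateHeavy:      { simpleTransfers: 0,   contractCalls: 20, stateHeavy: 80 },
  // Worst-case blocks are approximated here as fully state-heavy; the
  // actual adversarial blocks will be crafted separately from bytecode
  // designed to maximize processing time.
  worstCase:       { simpleTransfers: 0,   contractCalls: 0,  stateHeavy: 100 },
};
```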
Metrics Collected
- Block import time (ms)
- Peak memory usage (MB)
- CPU utilization (%)
- Disk I/O (MB/s)
- Network bytes transmitted
- Time to first peer receipt
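A minimal sketch of the per-block sample that could capture these metrics; field names are assumptions for this plan, not the schema of an existing tool:

```typescript
// One benchmark sample per imported block; field names are assumptions
// for this plan, not the schema of an existing benchmarking tool.
interface BenchmarkSample {
  client: 'core-geth' | 'besu';
  hardwareTier: 'entry' | 'mid-range' | 'high-end';
  gasLimit: bigint;
  blockImportMs: number;          // block import time
  peakMemoryMb: number;           // peak memory usage
  cpuUtilizationPct: number;      // average CPU utilization during import
  diskIoMbPerSec: number;         // disk I/O
  networkBytesSent: number;       // network bytes transmitted
  firstPeerReceiptMs: number;     // time to first peer receipt
}
```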
Research Plan
Phase 1: Environment Setup
- Configure test hardware at each tier
- Set up Core-Geth test nodes with various configurations
- Set up Besu test nodes with various configurations
- Develop synthetic block generation tooling
- Establish measurement and logging infrastructure
Phase 2: Core-Geth Benchmarking
- Measure baseline performance at current 8M gas limit
- Test processing time at 16M, 32M, 64M, 128M, 256M gas
- Profile memory usage for each gas level
- Identify performance cliffs and bottlenecks
- Test with worst-case adversarial workloads
Phase 3: Besu Benchmarking
- Measure baseline performance at current 8M gas limit
- Test processing time at 16M, 32M, 64M, 128M, 256M gas
- Profile memory usage for each gas level
- Identify performance cliffs and bottlenecks
- Test with worst-case adversarial workloads
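Phases 2 and 3 share the same sweep structure, differing only in the client under test. A minimal sketch of the benchmark matrix, where `runBenchmark` is a hypothetical harness function supplied by the Phase 1 tooling:

```typescript
// Sketch of the Phase 2/3 benchmark sweep across clients, gas levels,
// and workload types. `runBenchmark` is a hypothetical harness function,
// not part of either client.
const GAS_LEVELS = [8_000_000, 16_000_000, 32_000_000, 64_000_000, 128_000_000, 256_000_000];
const CLIENTS = ['core-geth', 'besu'] as const;
const WORKLOADS = ['simpleTransfers', 'contractCalls', 'stateHeavy', 'worstCase'] as const;

async function runSweep<T>(
  runBenchmark: (client: string, gas: number, workload: string) => Promise<T[]>
): Promise<T[]> {
  const results: T[] = [];
  for (const client of CLIENTS) {
    for (const gas of GAS_LEVELS) {
      for (const workload of WORKLOADS) {
        // Each cell of the matrix imports a batch of synthetic blocks
        // and returns one sample per block.
        results.push(...(await runBenchmark(client, gas, workload)));
      }
    }
  }
  return results;
}
```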
Phase 4: Network Propagation
- Set up multi-node test network (20+ nodes)
- Measure propagation time for blocks at each gas level
- Test under various network conditions (latency, packet loss)
- Model expected uncle rate impact
- Validate findings against historical ETC data
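A common first-order model for the uncle rate assumes Poisson block arrivals: if a block takes t seconds to propagate and the mean block interval is T (~13 s on ETC), the probability that a competing block is found during propagation is roughly 1 - e^(-t/T). A sketch of this modelling assumption, to be validated against historical data in this phase:

```typescript
// First-order uncle-rate model assuming Poisson block arrivals: a block
// found while the previous one is still propagating becomes a potential
// uncle. This is a modelling assumption, not a validated result.
function expectedUncleRate(propagationSeconds: number, meanBlockTimeSeconds = 13): number {
  return 1 - Math.exp(-propagationSeconds / meanBlockTimeSeconds);
}

// Example: if a large block takes 1.5 s to reach most of the network,
// the model predicts an uncle rate of roughly 1 - e^(-1.5/13) ≈ 11%.
```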
Phase 5: Analysis & Reporting
- Compile benchmark results into standardized format
- Generate performance comparison charts
- Identify safe operating parameters for each client
- Document hardware requirements for each elasticity option
- Prepare recommendations for Elasticity Multiplier Selection
Expected Outcomes
- Performance Profiles: Processing time curves for each client × gas level × hardware tier
- Memory Requirements: Peak RAM needed for each configuration
- Safe Limits Report: Maximum gas limit recommendation for each client
- Hardware Guidelines: Updated node operator requirements
- Uncle Rate Model: Expected uncle rate as function of gas limit
Success Criteria
- Block processing time < 2 seconds on mid-range hardware at recommended limit
- Memory usage < 8GB on mid-range hardware at recommended limit
- No client crashes or state corruption during stress testing
- Both clients can process max-size blocks reliably
- Network propagation allows 95% of nodes to receive a block before the next block is produced
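For reference, these criteria can be expressed as a pass/fail check over aggregated results; the aggregate shape below is an assumption for illustration, with thresholds taken directly from this list:

```typescript
// Pass/fail check for the success criteria above. The aggregate shape
// is an assumption for illustration; thresholds come from this list.
interface AggregateResult {
  importMs: number;               // block import time at the recommended limit (statistic TBD, e.g. p95)
  peakMemoryGb: number;           // peak memory usage on mid-range hardware
  crashesObserved: number;        // client crashes or state corruptions during stress tests
  propagationCoveragePct: number; // % of nodes receiving a block before the next one
}

function meetsSuccessCriteria(r: AggregateResult): boolean {
  return (
    r.importMs < 2000 &&
    r.peakMemoryGb < 8 &&
    r.crashesObserved === 0 &&
    r.propagationCoveragePct >= 95
  );
}
```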
Dependencies
- Core-Geth development team - Access to profiling builds
- Besu development team - Access to profiling builds
- Test infrastructure - Multi-node network setup
- Elasticity Multiplier Selection - Downstream decision that depends on this research's results
Current Status
Status: TODO
Progress Log
- 2025-11-28: Initial research plan drafted
- Pending: Begin Phase 1 environment setup
Appendix: Benchmark Methodology Details
Block Generation
Synthetic blocks will be generated using controlled transaction mixes:
```typescript
interface BlockProfile {
  targetGas: bigint;
  txMix: {
    simpleTransfers: number; // % of gas as 21k transfers
    contractCalls: number;   // % of gas as contract interactions
    stateHeavy: number;      // % of gas as SLOAD/SSTORE-heavy calls
  };
}
```
Measurement Protocol
Each measurement will:
- Start from a clean state (freshly synced node)
- Import 100 blocks to warm caches
- Measure next 1000 blocks
- Report: min, max, mean, p50, p95, p99
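A minimal sketch of how these statistics could be computed from the collected samples, using nearest-rank percentiles (an assumption; any consistent percentile definition would do):

```typescript
// Summary statistics over per-block import times, using nearest-rank
// percentiles. A sketch, not a prescribed implementation.
function summarize(samplesMs: number[]): {
  min: number; max: number; mean: number; p50: number; p95: number; p99: number;
} {
  const sorted = [...samplesMs].sort((a, b) => a - b);
  const n = sorted.length;
  // Nearest-rank percentile: the value at rank ceil(p/100 * n).
  const pct = (p: number) => sorted[Math.min(n - 1, Math.ceil((p / 100) * n) - 1)];
  const mean = sorted.reduce((sum, x) => sum + x, 0) / n;
  return {
    min: sorted[0],
    max: sorted[n - 1],
    mean,
    p50: pct(50),
    p95: pct(95),
    p99: pct(99),
  };
}
```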