Champ✊🏻.
Sep 21, 2025
Expert Q&A

Sharding On-Chain Games in Sui.

What's the recommended strategy for sharding state-heavy on-chain games in Sui to fully leverage parallel execution of Move transactions?

  • Sui
  • Architecture
  • Transaction Processing

4 Answers

0xF1RTYB00B5.
Sep 27 2025, 12:13

Sharding state-heavy on-chain games on Sui is one of the highest-leverage ways to get real throughput. Below is a recommended strategy (architecture + trade-offs), concrete Move patterns you can copy, and an operational checklist (hotspots, migration, testing). I’ll be explicit about why each choice helps parallelism on Sui.


TL;DR — recommended strategy

  1. Shard by ownership (zones/regions/entities): make each game shard an owned object (zone, room, player, item).
  2. Design for disjoint-touch txs: ensure typical player actions touch only their own objects or a small bounded set of shard objects.
  3. Use async CrossMsg objects for cross-shard interactions (prepare/commit pattern).
  4. Keep hot data in small “proxy” objects and do heavy compute off-chain (with proofs or batched on-chain commits).
  5. Mitigate hotspots with dynamic partitioning, economic throttles, and short lock expiries.
  6. Lazy accounting: use accumulators (per-shard) and let players claim/gather off-chain or lazily on-chain.
  7. Test with adversarial/fuzz scenarios and run on-chain monitors.

Why this fits Sui

Sui gains throughput because transactions touching disjoint objects run in parallel. So the goal is: minimize the set of objects each common transaction touches, and when cross-object work is required, make it explicit with short, deterministic protocols that avoid long locks.


Architectural building blocks

1) Owned objects as shard primitives

Make each zone/room/player/item its own object with the key ability. Typical actions (move, attack, pick up) should mutate only the player and local zone objects.

Benefits: disjoint objects = parallel execution.

2) Zone/region partitioning (spatial sharding)

Partition the world into zones (grid, hex, or dynamic partitions). Map players/objects to zone objects. Cross-zone moves require a controlled handoff.
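
A minimal sketch of a deterministic grid mapping, assuming a fixed-size square grid and unsigned world coordinates (Move has no signed integers); the module name and ZONE_SIZE constant are illustrative, and clients reuse the same math off-chain to route a transaction to the right Zone object:

module game::grid {
  // Deterministic world-position -> zone-cell mapping on a fixed-size square grid.
  const ZONE_SIZE: u64 = 64;

  public fun zone_cell(x: u64, y: u64): (u64, u64) {
    (x / ZONE_SIZE, y / ZONE_SIZE)
  }

  // Flatten a cell into a single key, e.g. for an off-chain registry of zone object IDs.
  public fun zone_key(x: u64, y: u64, world_width_in_zones: u64): u64 {
    let (cx, cy) = zone_cell(x, y);
    cy * world_width_in_zones + cx
  }
}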

3) Cross-shard messaging (async objects)

For actions that need state on multiple shards, create CrossMsg objects that are processed by destination shards. This avoids locking two shards in the same tx.

Flow:

  • Caller creates a Prepared or CrossMsg object carrying only the minimal required info.
  • Destination (or a relayer/worker) picks it up, applies, and writes a Commit object.
  • The origin finalizes or rolls back as needed.

4) Proxy objects & caches for hot state

For hot reads (leaderboards, top N), use small proxy objects that hold summary data (top scores) and push heavy updates off-chain or batch on-chain.
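
A minimal sketch of such a proxy, assuming an off-chain aggregator holds an updater capability; the Leaderboard and UpdaterCap types and field names are illustrative, not part of any framework:

module game::leaderboard {
  use sui::object::{Self, UID};
  use sui::tx_context::TxContext;
  use std::vector;

  // Capability held by the off-chain aggregator allowed to push batched updates.
  struct UpdaterCap has key { id: UID }

  // Small shared proxy: only the top-N summary lives on-chain.
  struct Leaderboard has key {
    id: UID,
    top_players: vector<address>,
    top_scores: vector<u64>,
    last_update_epoch: u64,
  }

  public fun create(ctx: &mut TxContext): (Leaderboard, UpdaterCap) {
    (
      Leaderboard { id: object::new(ctx), top_players: vector::empty(), top_scores: vector::empty(), last_update_epoch: 0 },
      UpdaterCap { id: object::new(ctx) }
    )
  }

  // Batched summary push: one cheap write to a single small object.
  public entry fun push_summary(_cap: &UpdaterCap, board: &mut Leaderboard, players: vector<address>, scores: vector<u64>, epoch: u64) {
    assert!(vector::length(&players) == vector::length(&scores), 0);
    board.top_players = players;
    board.top_scores = scores;
    board.last_update_epoch = epoch;
  }
}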

5) Lazy settlement / accumulators

Use per-shard accumulators (e.g., acc_reward_per_action) and let users claim lazily. This prevents updating N objects per event.
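
A minimal sketch of the per-shard accumulator with lazy claiming; the module, field names, and scaling factor are illustrative assumptions (same idea as the acc_reward_per_action field on the Zone shard in pattern A below):

module game::rewards {
  use sui::object::UID;

  const SCALE: u128 = 1_000_000_000;

  // One pool per shard: each reward event touches only this object.
  struct ShardPool has key {
    id: UID,
    acc_reward_per_action: u128, // scaled by SCALE
  }

  // Per-player bookkeeping lives on the player's own object.
  struct PlayerStake has key {
    id: UID,
    actions: u64,
    reward_debt: u128, // actions * acc_reward_per_action / SCALE at last claim
  }

  // Distribute a reward across all actions in the shard with a single write.
  public fun fund(pool: &mut ShardPool, reward: u128, total_actions: u128) {
    assert!(total_actions > 0, 0);
    pool.acc_reward_per_action = pool.acc_reward_per_action + (reward * SCALE) / total_actions;
  }

  // Lazy claim: reads the pool, writes only the caller's own object.
  public fun claim(pool: &ShardPool, stake: &mut PlayerStake): u128 {
    let entitled = (stake.actions as u128) * pool.acc_reward_per_action / SCALE;
    let owed = entitled - stake.reward_debt;
    stake.reward_debt = entitled;
    owed
  }
}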

6) Deterministic conflict rules & short locks

If collisions are possible (two players try to pick the same item), decide deterministic tiebreakers (e.g., tx hash ordering) and use short-lived lock objects that expire.


Move patterns (concrete examples)

Below are minimal, illustrative Move snippets — adapt to your codebase and types.

A — Zone shard and player objects

module game::shards {
  use sui::object::{Self, ID, UID};
  use sui::tx_context::TxContext;
  use std::vector;

  // Zone (shard) object
  struct Zone has key {
    id: UID,
    // Move has no signed integers or tuple fields, so grid coords are two u64s
    x: u64,
    y: u64,
    // small index of object IDs local to the zone (avoid large vectors)
    local_objects: vector<ID>,
    // accumulator for local rewards / fees
    acc_reward_per_action: u128,
    total_actions: u64,
  }

  // Player as owned object
  struct Player has key {
    id: UID,
    owner: address,
    zone_id: ID, // current zone (referenced by ID, not UID)
    hp: u64,
    inventory: vector<ID>, // item object IDs
    last_action_nonce: u64, // optimistic concurrency guard
  }

  // Plain public functions: entry functions cannot return values, so the caller
  // (or a wrapper) shares/transfers the created object.
  public fun create_zone(x: u64, y: u64, ctx: &mut TxContext): Zone {
    Zone { id: object::new(ctx), x, y, local_objects: vector::empty(), acc_reward_per_action: 0, total_actions: 0 }
  }

  public fun create_player(owner: address, zone: ID, ctx: &mut TxContext): Player {
    Player { id: object::new(ctx), owner, zone_id: zone, hp: 100, inventory: vector::empty(), last_action_nonce: 0 }
  }
}

B — CrossMsg for cross-zone actions (prepare/commit)

module game::crossmsg {
  use sui::object::{Self, ID, UID};
  use sui::transfer;
  use sui::tx_context::TxContext;

  // A cross-shard message that stores minimal info and is processed by the destination shard
  struct CrossMsg has key {
    id: UID,
    from_zone: ID,
    to_zone: ID,
    actor: address, // player who initiated
    payload_hash: vector<u8>, // compact description (or full data)
    created_at: u64,
    processed: bool,
  }

  // caller creates the message; shared so the destination worker can pick it up
  public entry fun send(from_zone: ID, to_zone: ID, actor: address, payload_hash: vector<u8>, created_at: u64, ctx: &mut TxContext) {
    let msg = CrossMsg { id: object::new(ctx), from_zone, to_zone, actor, payload_hash, created_at, processed: false };
    transfer::share_object(msg);
  }

  // processed by destination shard/worker
  public entry fun process(msg: &mut CrossMsg) {
    assert!(!msg.processed, 1);
    // apply payload to destination zone/player here
    msg.processed = true;
  }
}

Client flow: the player calls send to create the CrossMsg (and pays the gas) → a relayer/worker picks it up and calls process.

C — Short lock (contention avoidance) object

module game::lock {
  use sui::object::{Self, ID, UID};
  use sui::tx_context::TxContext;

  struct Lock has key {
    id: UID,
    resource_id: ID, // e.g., item id
    holder: address,
    expiry_epoch: u64,
  }

  public fun acquire_lock(resource: ID, holder: address, expiry: u64, ctx: &mut TxContext): Lock {
    // create a Lock for the resource; enforcing one-lock-per-resource needs a
    // registry or a dynamic field keyed by resource_id
    Lock { id: object::new(ctx), resource_id: resource, holder, expiry_epoch: expiry }
  }

  public entry fun release_lock(lock: Lock, caller: address) {
    assert!(lock.holder == caller, 1);
    // an object must be unpacked before its UID can be deleted
    let Lock { id, resource_id: _, holder: _, expiry_epoch: _ } = lock;
    object::delete(id);
  }

  public entry fun reclaim_if_expired(lock: Lock, now_epoch: u64) {
    assert!(now_epoch > lock.expiry_epoch, 2);
    let Lock { id, resource_id: _, holder: _, expiry_epoch: _ } = lock;
    object::delete(id);
  }
}

Use this for contested resources (loot chests etc.). Keep expiry short.

D — Optimistic local updates with nonce + conflict resolution

// Added to game::shards, since Move only allows struct field access inside the
// defining module. Deltas are u64 because Move has no signed integer types.
public entry fun optimistic_move(player: &mut Player, expected_nonce: u64, dx: u64, dy: u64) {
  assert!(player.last_action_nonce == expected_nonce, 10);
  // apply the move using dx/dy (update zone_id if crossing a boundary -> create a CrossMsg)
  player.last_action_nonce = player.last_action_nonce + 1;
}

If nonce mismatches, client must refresh and retry.


Sharding governance & migration

  • Dynamic partitioning: monitor shard load and split hot shards into two sub-shards (reassign local objects). Implement a migration transaction that moves ownership of some object IDs to a new zone object. Make migrations small-batch and idempotent.
  • Rehashing strategy: choose deterministic mapping (grid → zone id) or hashing (object id → shard modulo N). Deterministic maps help with client routing.
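
A minimal sketch of one small-batch migration step, as described in the first bullet above, assuming the local_objects index from the shards module and living in that module (field access is module-private in Move):

// Added to game::shards: move up to `limit` object IDs from a hot zone's index into
// a new sub-zone. Re-running it on an already-drained source is a no-op, which keeps
// small-batch migrations retry-safe.
public entry fun migrate_batch(from: &mut Zone, to: &mut Zone, limit: u64) {
  let moved = 0;
  while (moved < limit && !vector::is_empty(&from.local_objects)) {
    vector::push_back(&mut to.local_objects, vector::pop_back(&mut from.local_objects));
    moved = moved + 1;
  };
  // The affected Player/Item objects still need their zone_id updated (e.g. via CrossMsg).
}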

Hotspot mitigation

Hotspots are inevitable (e.g., marketplace, boss fight). Strategies:

  • Move hotspot state off-chain (simulate fights off-chain, commit result root on-chain).
  • Rate limits & economic costs: tax frequent actions or require a small deposit to create prepares.
  • Shard splitting on demand and move high-traffic NPCs/areas into separate shard(s).
  • Proxy caches: keep a small on-chain cache of top N entries for reads, update off-chain frequently.
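
For the first bullet (simulate off-chain, commit a result root on-chain), a minimal sketch that only anchors a hash of the simulated outcome; the module, capability, and field names are illustrative assumptions:

module game::commitments {
  use sui::object::{Self, UID};
  use sui::transfer;
  use sui::tx_context::{Self, TxContext};

  // Capability held by the authorized off-chain simulator.
  struct SimulatorCap has key { id: UID }

  // One commitment per off-chain episode (boss fight, tournament round, ...).
  struct ResultCommit has key {
    id: UID,
    episode: u64,
    result_root: vector<u8>, // e.g. a Merkle root over (player, outcome) leaves
    committed_by: address,
  }

  public entry fun commit_result(_cap: &SimulatorCap, episode: u64, result_root: vector<u8>, ctx: &mut TxContext) {
    let commit = ResultCommit {
      id: object::new(ctx),
      episode,
      result_root,
      committed_by: tx_context::sender(ctx),
    };
    // Shared so players can reference it when claiming outcomes with a Merkle proof.
    transfer::share_object(commit);
  }
}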

Security & anti-cheat considerations

  • Canonical authority for physics/critical RNG: either deterministic serverless logic or verified randomness (beacon). Don’t trust client RNG.
  • Proofs for off-chain work: if heavy simulation runs off-chain, require merkle-proofed state transitions or zk-proofs for critical steps.
  • Sanity checks & invariants: assert invariants on commit (e.g., item supply remains stable, total gold conserved).

Observability & operational playbook

  • On-chain events: emit Action, CrossMsgCreated, ShardSplit, Reconcile events.
  • Metrics: shard tx/sec, avg objects touched per tx, avg lock hold time, failure/retry rates.
  • Auto-operators: off-chain agents that: process CrossMsgs, perform migrations, compact tombstoned objects.
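
Emitting these is cheap; a minimal sketch of one such event using sui::event (struct and field names are illustrative):

module game::events {
  use sui::event;
  use sui::object::ID;

  // Events need copy + drop to be emitted.
  struct CrossMsgCreated has copy, drop {
    msg_id: ID,
    from_zone: ID,
    to_zone: ID,
    actor: address,
  }

  public fun emit_crossmsg_created(msg_id: ID, from_zone: ID, to_zone: ID, actor: address) {
    event::emit(CrossMsgCreated { msg_id, from_zone, to_zone, actor });
  }
}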

Testing & validation

  • Concurrency fuzzing: run simultaneous actions on same/different shards.
  • Adversarial tests: simulate malicious clients creating many prepares/locks.
  • Scale tests: emulate thousands of players distributed spatially to see where contention arises.
  • Invariant tests: implement on-chain assert invariants and off-chain reconciler to verify totals.

Deployment checklist (practical steps)

  1. Start with coarse zones (large), then add monitoring.
  2. Instrument to detect hotspots (ops/sec per zone).
  3. When hot, split zone into subzones and migrate subset of objects (small batches).
  4. Introduce CrossMsg relayers (workers) to process async interactions reliably.
  5. Optimize: move noncritical, high-frequency updates off-chain and commit roots on-chain.

Example small workflow (player moves between zones)

  1. Player submits a tx to move_in_zone(playerObj, targetZoneId) — if the target is the same zone, it's a simple atomic update touching only the player and that zone, so it runs in parallel with transactions on other zones.
  2. If moving to another zone: create CrossMsg to destination, remove minimal reservation from source (unlock), destination worker processes CrossMsg and finalizes. If failure, origin reverts or refund logic triggers after expiry.
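
A minimal sketch of step 1, assuming the Zone and Player types from the shards module above (and living in that module, since Move restricts field access to the defining module):

// Added to game::shards: intra-zone move as a single tx that touches only the player
// and its current zone, so it parallelizes against activity in all other zones.
public entry fun move_in_zone(player: &mut Player, zone: &mut Zone, expected_nonce: u64) {
  // the player must actually be in this zone
  assert!(player.zone_id == object::id(zone), 20);
  assert!(player.last_action_nonce == expected_nonce, 21);
  // ... apply the position update here ...
  player.last_action_nonce = player.last_action_nonce + 1;
  zone.total_actions = zone.total_actions + 1;
}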

Final recommendation

Shard by ownership and spatial locality: make every player, item, and zone its own small owned object; design common transactions to touch at most one shard. For cross-shard interactions use small async messages and short locks. Push heavy compute off-chain and use per-shard accumulators + lazy claiming to avoid N-way writes. Monitor for hotspots and split shards dynamically. With these patterns you’ll get high parallel throughput while keeping game logic auditable and secure.

Big Mike.
Sep 27 2025, 12:38

I’ve built two experimental strategy games on Sui, and what I learned is this: the bottleneck isn’t computation, it’s state serialization. If all players’ actions touch the same game object, you lose Sui’s parallelism.

My strategy is to shard by zone, player, or match instance, depending on the game. The parent object just acts as an index, pointing to independent shards.

The principles I follow are:

  1. Don’t store everything in the World object.

    • The world should just contain references to zones.
    • Each zone is its own object, so two players fighting in Zone A and Zone B don’t block each other.
  2. Treat zones as “ownership boundaries.”

    • If players move between zones, I model this as object transfer between shards.
    • That way, the executor can naturally separate transactions.

Here’s a simplified sketch:

module game::Sharded {
    use sui::object::{Self, ID, UID};
    use sui::tx_context::TxContext;
    use std::vector;

    struct Zone has key {
        id: UID,
        resources: u64,
    }

    struct World has key {
        id: UID,
        zones: vector<ID>, // object::id(...) returns ID, not UID
    }

    public fun create_world(ctx: &mut TxContext): World {
        World { id: object::new(ctx), zones: vector::empty<ID>() }
    }

    public fun create_zone(world: &mut World, ctx: &mut TxContext): Zone {
        let zone = Zone { id: object::new(ctx), resources: 100 };
        vector::push_back(&mut world.zones, object::id(&zone));
        zone
    }

    public fun harvest(zone: &mut Zone, amount: u64) {
        assert!(zone.resources >= amount, 100);
        zone.resources = zone.resources - amount;
    }
}

This design means harvesting in Zone 1 and Zone 2 can happen in parallel.

In practice, I also shard player inventories — each player has their own Inventory object. That way, when multiple players act at once, they don’t step on each other’s state.
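
A minimal sketch of that per-player inventory, with illustrative resource fields (a real inventory would typically track item object IDs rather than counters):

module game::inventory {
    use sui::object::{Self, UID};
    use sui::transfer;
    use sui::tx_context::{Self, TxContext};

    // Owned by the player's address, so updates never touch shared state.
    struct Inventory has key {
        id: UID,
        gold: u64,
        wood: u64,
    }

    public entry fun create_inventory(ctx: &mut TxContext) {
        let inv = Inventory { id: object::new(ctx), gold: 0, wood: 0 };
        transfer::transfer(inv, tx_context::sender(ctx));
    }

    public entry fun deposit_wood(inv: &mut Inventory, amount: u64) {
        inv.wood = inv.wood + amount;
    }
}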

I’ve benchmarked this, and the throughput difference between a single global state and sharded state is massive: one scales linearly with players, the other stalls with just a handful of concurrent actions.

draSUIla.
Sep 27 2025, 13:28

My approach is to think of game state as a tree of ownership, where each leaf node is independently updatable. Instead of just sharding by region, I shard by player-owned subtrees.

That way, every player action touches only their subtree, unless two players interact directly.

Here’s how I structure it:

module game::TreeSharding {
    use sui::object::{Self, ID, UID};
    use sui::tx_context::TxContext;
    use std::vector;

    struct Player has key {
        id: UID,
        inventory: u64,
    }

    struct World has key {
        id: UID,
        players: vector<ID>,
    }

    public fun add_player(world: &mut World, ctx: &mut TxContext): Player {
        let p = Player { id: object::new(ctx), inventory: 0 };
        vector::push_back(&mut world.players, object::id(&p));
        p
    }

    public fun gather(player: &mut Player, amount: u64) {
        player.inventory = player.inventory + amount;
    }
}

This design works because parallel execution comes for free when each player’s object is distinct.

The real complexity arises in PvP mechanics (e.g., battles). In those cases, I design a temporary battle object owned jointly by the participants. That isolates conflict into its own shard, instead of blocking the entire game world.
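
In Sui terms, "owned jointly" in practice means a small shared object scoped to just the participants; a minimal sketch under that assumption (names are illustrative):

module game::battle {
    use sui::object::{Self, ID, UID};
    use sui::transfer;
    use sui::tx_context::TxContext;

    // Temporary shared object scoped to exactly two participants, so only their
    // transactions ever contend on it; the rest of the world stays parallel.
    struct Battle has key {
        id: UID,
        player_a: ID,
        player_b: ID,
        turn: u64,
        finished: bool,
    }

    public entry fun start_battle(player_a: ID, player_b: ID, ctx: &mut TxContext) {
        let b = Battle { id: object::new(ctx), player_a, player_b, turn: 0, finished: false };
        transfer::share_object(b);
    }

    public entry fun take_turn(battle: &mut Battle) {
        assert!(!battle.finished, 0);
        battle.turn = battle.turn + 1;
    }
}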

The trade-off with this model is cross-player coordination overhead, but I’ve found it scales way better than zone-based sharding alone.

lite.vue.
Sep 27 2025, 13:42

I come from a game dev background where state bloat is the main killer of scalability. In Sui, the catch is that shared objects serialize execution. So if I dump my entire game state into one giant shared object, I’ve basically recreated Ethereum’s bottleneck.

My solution is to shard gameplay objects along natural boundaries:

  • Each player inventory is an owned object.
  • Each battle arena or match is a separate shared object, so matches can run in parallel.
  • Leaderboards or global meta-state can be derived from events rather than stored centrally.

For example:

module game::arena {
    use sui::object::{Self, ID, UID};
    use sui::tx_context::TxContext;
    use std::vector;

    struct Arena has key {
        id: UID,
        players: vector<ID>, // IDs of player objects
    }

    public fun create_arena(ctx: &mut TxContext): Arena {
        Arena { id: object::new(ctx), players: vector::empty<ID>() }
    }

    // Take the player's ID: field access is restricted to the defining module,
    // and the arena index only needs the ID anyway.
    public entry fun join_arena(arena: &mut Arena, player_id: ID) {
        vector::push_back(&mut arena.players, player_id);
    }
}

By keeping arenas independent, I ensure that:

  • Arena A and Arena B can process battles at the same time.
  • A player’s inventory updates don’t block other players.
  • Only when players interact in the same arena does serialization occur.

The recommended strategy is: model the game state as a graph of composable, fine-grained objects. That’s how you fully unlock parallel execution in Sui.

