Sui.


0xF1RTYB00B5.
Sep 14, 2025
Expert Q&A

Sui’s parallel transaction execution

How can I, as a developer, leverage Sui’s parallel transaction execution to design dApps that minimize contention on shared objects while maximizing throughput?

  • Sui
  • Architecture
  • Transaction Processing
  • Security Protocols

Answers
Big Mike.
Sep 17 2025, 12:06

📘 How I Leverage Sui’s Parallel Transaction Execution as a Developer

When I build dApps on Sui, I design around its object-based concurrency model. The parallel executor is most powerful when transactions operate on disjoint objects. My goal is always to minimize contention on shared objects and let independent transactions scale horizontally.


🔑 Principle: Objects = Locks

  • If two transactions touch the same object, they serialize.
  • If they touch different objects, they run in parallel.
  • My design goal = break global state into many small, independent objects.
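This conflict rule can be sketched as a toy scheduler in TypeScript (an illustration of the rule only, not Sui's actual executor): transactions whose declared object sets are disjoint land in the same parallel batch, while overlapping ones serialize into later batches.

```typescript
// Sketch of the scheduling rule: transactions conflict if and only if
// their declared object sets intersect.
type Tx = { id: string; objects: string[] };

function parallelBatches(txs: Tx[]): Tx[][] {
  const batches: { txs: Tx[]; locked: Set<string> }[] = [];
  for (const tx of txs) {
    // place tx in the first batch whose locked set it doesn't touch
    let batch = batches.find((b) => tx.objects.every((o) => !b.locked.has(o)));
    if (!batch) {
      batch = { txs: [], locked: new Set() };
      batches.push(batch);
    }
    batch.txs.push(tx);
    for (const o of tx.objects) batch.locked.add(o);
  }
  return batches.map((b) => b.txs);
}
```

Two purchases of different listings share a batch; a second transaction on the same listing is pushed into the next one.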

🛒 Example 1: NFT Marketplace

Naïve design (bad): One big Marketplace object → every listing and purchase updates it → contention.

Optimized design (good): Each listing is its own object. Buyers/sellers interact directly with listings.

module marketplace::marketplace {
    use sui::coin::{Self, Coin};
    use sui::object::{Self, UID};
    use sui::sui::SUI;
    use sui::transfer;
    use sui::tx_context::{Self, TxContext};

    const EInsufficientPayment: u64 = 0;

    /// One shared object per listing: purchases of different
    /// listings never contend with each other.
    struct Listing<T: key + store> has key {
        id: UID,
        nft: T,
        price: u64,
        seller: address,
    }

    public entry fun create_listing<T: key + store>(
        nft: T,
        price: u64,
        ctx: &mut TxContext
    ) {
        transfer::share_object(Listing {
            id: object::new(ctx),
            nft,
            price,
            seller: tx_context::sender(ctx),
        });
    }

    public entry fun purchase<T: key + store>(
        listing: Listing<T>,
        payment: Coin<SUI>,
        ctx: &mut TxContext
    ) {
        let Listing { id, nft, price, seller } = listing;
        assert!(coin::value(&payment) >= price, EInsufficientPayment);
        object::delete(id);
        transfer::public_transfer(nft, tx_context::sender(ctx));
        transfer::public_transfer(payment, seller);
    }
}

Result:

  • Each Listing is independent.
  • Thousands of purchases run in parallel.
  • Contention arises only when two buyers race for the same listing.

💸 Example 2: Staking Pool

Naïve design (bad): All stakes and claims mutate one Pool object → bottleneck.

Optimized design (good): Each user gets a StakeReceipt object, while Pool only holds global params.

module staking::staking {
    use sui::balance::{Self, Balance};
    use sui::clock::{Self, Clock};
    use sui::coin::{Self, Coin};
    use sui::object::{Self, UID};
    use sui::sui::SUI;
    use sui::transfer;
    use sui::tx_context::{Self, TxContext};

    /// Shared object: holds only global parameters and the vault.
    struct Pool has key {
        id: UID,
        reward_rate: u64,
        total_staked: u64,
        funds: Balance<SUI>,
    }

    /// Owned, per-user object: claims touch only this.
    struct StakeReceipt has key {
        id: UID,
        amount: u64,
        last_update: u64,
    }

    public entry fun stake(
        pool: &mut Pool,
        tokens: Coin<SUI>,
        clock: &Clock,
        ctx: &mut TxContext
    ) {
        let amount = coin::value(&tokens);
        pool.total_staked = pool.total_staked + amount;
        balance::join(&mut pool.funds, coin::into_balance(tokens));
        transfer::transfer(
            StakeReceipt {
                id: object::new(ctx),
                amount,
                last_update: clock::timestamp_ms(clock),
            },
            tx_context::sender(ctx)
        );
    }

    public entry fun claim_rewards(
        receipt: &mut StakeReceipt,
        pool: &Pool,
        clock: &Clock
    ) {
        // compute_rewards is an elided helper
        let rewards = compute_rewards(receipt, pool.reward_rate, clock);
        receipt.last_update = clock::timestamp_ms(clock);
        // Payout elided: user modules cannot mint SUI; a real pool
        // pays rewards out of an escrowed reward Balance instead.
        let _ = rewards;
    }
}

Result:

  • Alice and Bob claim rewards in parallel.
  • Contention only occurs on Pool during infrequent parameter changes.

🎮 Example 3: Gaming (Leaderboard)

Naïve design (bad): One Leaderboard object updated for every score → massive contention.

Optimized design (good): Each player has a PlayerScore object. Leaderboard views built off-chain.

module game::game {
    use sui::object::{Self, UID};
    use sui::transfer;
    use sui::tx_context::{Self, TxContext};

    struct PlayerScore has key {
        id: UID,
        score: u64,
    }

    /// Mint an owned score object for the sender.
    public entry fun new_player(ctx: &mut TxContext) {
        transfer::transfer(
            PlayerScore { id: object::new(ctx), score: 0 },
            tx_context::sender(ctx)
        );
    }

    /// Owned object: only its owner can supply it, so no signer check is needed.
    public entry fun update_score(score_obj: &mut PlayerScore, new_score: u64) {
        if (new_score > score_obj.score) {
            score_obj.score = new_score;
        }
    }
}

Result:

  • Players update scores independently.
  • Global leaderboard built using events or indexers.

📊 Example 4: Token Transfers with Fee Buckets

Naïve design (bad): Every transfer mutates one FeeVault object.

Optimized design (good): Sharded FeeBuckets reduce contention.

module fees::fees {
    use sui::balance::{Self, Balance};
    use sui::coin::{Self, Coin};
    use sui::object::{Self, UID};
    use sui::sui::SUI;
    use sui::transfer;
    use sui::tx_context::TxContext;

    /// One of several shared buckets; clients pick one to spread writes.
    struct FeeBucket has key {
        id: UID,
        collected: Balance<SUI>,
    }

    public entry fun new_bucket(ctx: &mut TxContext) {
        transfer::share_object(FeeBucket {
            id: object::new(ctx),
            collected: balance::zero<SUI>(),
        });
    }

    public entry fun collect_fee(bucket: &mut FeeBucket, fee: Coin<SUI>) {
        balance::join(&mut bucket.collected, coin::into_balance(fee));
    }
}

Result:

  • Transfers write into different buckets in parallel.
  • Later aggregation consolidates fees.

[Diagram: bad (single shared Marketplace object) vs. good (independent Listing objects) design for the marketplace example]

Haywhy .
Sep 14 2025, 23:58

When I build DeFi protocols on Sui, I pay close attention to how state is represented, because that directly impacts how much parallelism the executor can exploit. If I model everything around a single shared pool object, every swap or stake operation will collide on that pool — and throughput collapses.

So, my approach is to isolate user activity into independent objects and minimize how often transactions hit a shared object.

Take a staking pool as an example. A basic design would use one shared Pool object where every user’s stake and rewards are tracked. That means every stake(), unstake(), and claim() call modifies the same object — a classic bottleneck.

On Sui, I redesign it so that each user gets a personal staking receipt object when they deposit.

  • The shared Pool object only stores global parameters (like reward rate and total supply).
  • Each user’s deposit lives in their own StakeReceipt object, which they can update independently.

That way:

  • Alice staking doesn’t block Bob staking.
  • Reward claims are parallelized because each claim touches only the claimant’s StakeReceipt.
  • The only serialized action might be a periodic pool-wide reward adjustment, but that happens infrequently.

Here’s a simplified Move sketch:

module staking::staking {
    use sui::balance::{Self, Balance};
    use sui::clock::{Self, Clock};
    use sui::coin::{Self, Coin};
    use sui::object::{Self, UID};
    use sui::sui::SUI;
    use sui::transfer;
    use sui::tx_context::{Self, TxContext};

    struct Pool has key {
        id: UID,
        reward_rate: u64,
        total_staked: u64,
        funds: Balance<SUI>, // escrowed stakes
    }

    struct StakeReceipt has key {
        id: UID,
        amount: u64,
        last_update: u64,
    }

    // User stakes into the pool; the receipt is an owned object
    public entry fun stake(
        pool: &mut Pool,
        tokens: Coin<SUI>,
        clock: &Clock,
        ctx: &mut TxContext
    ) {
        let amount = coin::value(&tokens);
        pool.total_staked = pool.total_staked + amount;
        balance::join(&mut pool.funds, coin::into_balance(tokens));
        transfer::transfer(
            StakeReceipt {
                id: object::new(ctx),
                amount,
                last_update: clock::timestamp_ms(clock),
            },
            tx_context::sender(ctx)
        );
    }

    // User claims rewards independently; the pool is only read
    public entry fun claim_rewards(
        receipt: &mut StakeReceipt,
        pool: &Pool,
        clock: &Clock
    ) {
        // compute_rewards is an elided helper
        let rewards = compute_rewards(receipt, pool.reward_rate, clock);
        receipt.last_update = clock::timestamp_ms(clock);
        // Payout elided: user modules cannot mint SUI; a real pool
        // pays rewards out of an escrowed reward Balance instead.
        let _ = rewards;
    }
}

In this setup:

  • The hot path (claims, small deposits, withdrawals) is parallelized. Each user interacts mostly with their own object.
  • The cold path (pool parameters like reward rate) stays in the shared Pool object, but it’s rarely updated.

I also make sure my client handles optimistic retries. If two users happen to stake at the exact same moment and collide on pool.total_staked, one tx may fail — the client refreshes the pool and retries seamlessly.

This pattern of splitting per-user/per-asset state away from shared state is the key to throughput. It turns a design that would serialize every transaction into one where 99% of activity runs in parallel, and only the occasional global update requires serialization.

BigLoba.
Sep 15 2025, 10:00

When I build on Sui, I think about concurrency from the very start. The parallel executor gives me free scalability, but only if I model my dApp so transactions don’t keep colliding on the same objects. My strategy is to minimize shared-object contention and design my state layout so that the majority of updates can run independently.

For example, instead of using a single global registry or counter, I break things down into per-user or per-asset objects. That way, a user updating their profile or balance doesn’t block another user from doing the same. If I really need global views, I rely on sharded structures or append-only objects where each transaction writes to its own partition, and I aggregate the results asynchronously.
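The shard-picking step can be sketched in TypeScript (`pickShard` and the `counter_shard_N` naming are my illustration, not an SDK API): hashing the sender's address gives each writer a stable shard, so concurrent writers usually touch different objects.

```typescript
// Deterministically map a sender address to one of N shard object slots,
// so writes from different users spread across independent objects.
function pickShard(sender: string, shardCount: number): number {
  let hash = 0;
  for (const ch of sender) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0; // simple rolling hash
  }
  return hash % shardCount;
}

// Client-side: write to counter_shard_<k> instead of one global counter.
const shardId = `counter_shard_${pickShard("0xa11ce", 8)}`;
```

A background job (or an off-chain indexer) later sums the shards into the global view.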

Dynamic fields are another trick I use: by pushing frequently changing substate into child objects, I make sure that concurrent updates hit different object IDs. This keeps the parent object stable and reduces conflicts.

I also optimize transactions to touch the smallest possible object set. The fewer objects a tx declares, the higher the chance it can run in parallel. On the client side, I code for optimistic retries—if a transaction fails due to version mismatch, the app refreshes the latest state and resubmits seamlessly.

Finally, I keep read-heavy work off-chain by emitting events and indexing them. That way, my on-chain path is lean and optimized purely for consensus-critical writes.

In short: I treat objects like locks. By partitioning state, offloading reads, and keeping shared objects light, I let Sui’s parallelism do the heavy lifting. That’s how I consistently push high throughput while keeping user experience smooth.

DollyStaff.
Sep 14 2025, 22:43

As a Web3 developer working with Sui, I've found that designing dApps that maximize parallel execution while minimizing contention requires careful consideration of object relationships and transaction patterns. Here's my structured approach to achieving optimal performance:

Understanding Object Independence

First, let me visualize how Sui's object-centric architecture enables parallel processing:

sequenceDiagram
    participant T1 as Transaction 1
    participant T2 as Transaction 2
    participant O1 as Object A
    participant O2 as Object B
    participant Consensus

    Note over T1,T2: Parallel Execution Example
    
    par Independent Objects
        T1->>O1: Read Object A
        O1-->>T1: Return State
        T1->>Consensus: Validate TX1
        Consensus-->>T1: Validated
        T1->>O1: Update Object A
    and Independent Objects
        T2->>O2: Read Object B
        O2-->>T2: Return State
        T2->>Consensus: Validate TX2
        Consensus-->>T2: Validated
        T2->>O2: Update Object B
    end
    
    Note over T1,T2: Shared Object Contention Example
    
    par Contended Objects
        T1->>O1: Read Object A
        O1-->>T1: Return State
        T2->>O1: Read Object A
        O1-->>T2: Return State
        T1->>Consensus: Validate TX1
        Consensus-->>T1: Validated
        T2->>Consensus: Validate TX2
        Consensus-->>T2: Validated
        T1->>O1: Update Object A
        T2->>O1: Update Object A
    end

The diagram above illustrates two key scenarios I encounter in my development work:

  1. Parallel Execution: Transactions operate independently on separate objects, allowing true concurrent processing through shared consensus validation
  2. Contention Scenario: Multiple transactions competing for the same object, requiring sequential processing despite shared consensus

Based on my experience, here are the practical patterns I use to maximize parallel execution while minimizing contention:

My Data Structure Patterns

// Pattern 1: User-specific objects
struct UserProfile {
    id: ObjectId,
    owner: Address,
    personal_data: PersonalData,
}

// Pattern 2: Sharded data structures
struct DataShard {
    shard_id: ShardId,
    entries: Vec<Entry>,
    metadata: ShardMetadata,
}

My Transaction Patterns

// Pattern 1: Independent operations
async function updateProfile(user_id: string, updates: Partial<PersonalData>) {
    const profile = await sui.getObject<UserProfile>(user_id);
    return sui.executeTransaction({
        kind: 'object',
        inputs: [profile],
        sequenceNumber: profile.sequenceNumber,
        data: { 
            personal_data: updates 
        }
    });
}

// Pattern 2: Batched operations
async function batchUpdateProfiles(updates: Map<string, Partial<PersonalData>>) {
    const entries = Array.from(updates.entries());
    const profiles = await Promise.all(
        entries.map(([id]) => sui.getObject<UserProfile>(id))
    );

    return sui.executeBatchTransaction(profiles.map((profile, idx) => ({
        kind: 'object',
        inputs: [profile],
        sequenceNumber: profile.sequenceNumber,
        data: {
            personal_data: entries[idx][1]
        }
    })));
}

Implementation Guidelines I Follow

  1. Object Organization
     • I create user-specific objects instead of shared pools
     • Implement data sharding based on usage patterns
     • Use separate objects for frequently updated fields
  2. Transaction Design
     • I batch operations on independent objects
     • Minimize cross-object dependencies
     • Use version numbers for optimistic concurrency control
  3. Performance Optimization
     • Monitor contention patterns in production
     • Adjust object granularity based on usage
     • Implement retry logic for contended transactions

Best Practices I've Developed

  1. Object Structure
     • I keep frequently accessed data in separate objects
     • Group related but infrequently updated data together
     • Implement efficient version resolution mechanisms
  2. Transaction Flow
     • Design transactions to operate on independent objects
     • Use batch processing for related operations
     • Implement idempotent transaction handlers

Through careful application of these patterns, I've successfully minimized contention while maximizing throughput in my dApps. The key is understanding how Sui's parallel processing works and designing your applications to take full advantage of it.

LOLLYPOP.
Sep 14 2025, 22:53

When I leverage Sui’s parallel transaction execution, my main goal is to make sure that high-frequency user actions don’t pile up on a single shared object. In DeFi, that’s usually the biggest trap: a staking pool, lending vault, or liquidity pool is often modeled as one global object, which forces serialization.

Let me give a concrete example. Suppose I’m building a staking dApp where thousands of users deposit tokens into a pool and earn rewards. If I just use one StakingPool shared object that holds all deposits and reward state, every single deposit or withdrawal will contend on that one object. Even though Sui supports parallel execution, those transactions will serialize, killing throughput.

The way I solve this is by separating user state from global pool state. Instead of writing directly into the pool object, I give each staker their own StakeReceipt object. That receipt records how much they staked, when they staked, and any claimable rewards. The pool object only stores aggregate parameters (like reward rate or epoch length), which change infrequently.

Here’s a simplified Move-style sketch:

module staking::staking {
    use sui::balance::{Self, Balance};
    use sui::coin::{Self, Coin};
    use sui::object::{Self, UID};
    use sui::sui::SUI;
    use sui::transfer;
    use sui::tx_context::{Self, TxContext};

    const ENotOwner: u64 = 0;

    struct Pool has key {
        id: UID,
        reward_rate: u64, // reward tokens per epoch
        total_staked: u64,
        funds: Balance<SUI>, // staked coins plus reward reserve
    }

    struct StakeReceipt has key {
        id: UID,
        owner: address,
        amount: u64,
        start_epoch: u64,
    }

    // Each stake creates its own independent receipt object
    public entry fun stake(pool: &mut Pool, coins: Coin<SUI>, ctx: &mut TxContext) {
        let amount = coin::value(&coins);
        pool.total_staked = pool.total_staked + amount;
        balance::join(&mut pool.funds, coin::into_balance(coins));
        transfer::transfer(
            StakeReceipt {
                id: object::new(ctx),
                owner: tx_context::sender(ctx),
                amount,
                start_epoch: tx_context::epoch(ctx),
            },
            tx_context::sender(ctx)
        );
    }

    // Redeem stake plus rewards, consuming the receipt
    public entry fun unstake(pool: &mut Pool, receipt: StakeReceipt, ctx: &mut TxContext) {
        let StakeReceipt { id, owner, amount, start_epoch } = receipt;
        assert!(tx_context::sender(ctx) == owner, ENotOwner);
        let reward = (tx_context::epoch(ctx) - start_epoch) * pool.reward_rate;
        pool.total_staked = pool.total_staked - amount;
        object::delete(id);
        // Pay out from the pool's funds rather than minting SUI,
        // which user modules cannot do.
        transfer::public_transfer(coin::take(&mut pool.funds, amount + reward, ctx), owner);
    }
}

With this design:

  • Each stake action creates its own StakeReceipt object, so deposits from different users don’t collide.
  • The only field in the pool object that updates often is total_staked, which I can further optimize by sharding pools or updating it asynchronously in batches.
  • Unstaking only touches the staker’s own receipt plus a light update to the pool.

The effect is that 99% of user interactions are parallelizable because they’re isolated to unique receipt objects. Only rare global updates (like adjusting reward rates or consolidating totals) touch the pool object.

On the client side, I still plan for contention — if two users try to unstake at the exact same time, one may get a version mismatch on the pool. In that case, the client just retries with the updated version.

By restructuring state like this, I turn a potential bottleneck (one global pool object) into a scalable design where the more users stake, the more parallel objects exist to absorb that traffic. That’s how I maximize throughput while keeping contention minimal.

MOT.
MOT40
Sep 14 2025, 22:54

When I’m building on Sui, I always remind myself that parallel execution works best when transactions don’t overlap on the same objects. That means if I design my contracts around one or two global shared objects, I’m basically throwing away Sui’s advantages. The trick is to structure state so that writes are distributed across many objects.

Let’s take the example of a staking protocol. A naïve design would be to have a single StakingPool shared object that holds all delegations, rewards, and stakes. Every time a user stakes or unstakes, that one pool object gets mutated. On Ethereum or similar chains this is common, but on Sui this would immediately serialize all stake-related transactions into a bottleneck.

Instead, I design the system so that each staker gets their own delegation object. When Alice stakes, a new Delegation object is created under her address. That object stores how much she staked and when. Bob’s delegation is a totally separate object. Both Alice and Bob can stake, restake, or withdraw in parallel without blocking each other, because their objects don’t overlap.

Here’s a simplified Move-like sketch:

module staking::staking {
    use sui::balance::{Self, Balance};
    use sui::coin::{Self, Coin};
    use sui::object::{Self, UID};
    use sui::sui::SUI;
    use sui::transfer;
    use sui::tx_context::{Self, TxContext};

    struct Delegation has key {
        id: UID,
        stake: Balance<SUI>, // the locked stake travels with the object
        start_epoch: u64,
    }

    // Create an independent delegation object for each staker
    public entry fun stake(amount: Coin<SUI>, ctx: &mut TxContext) {
        transfer::transfer(
            Delegation {
                id: object::new(ctx),
                stake: coin::into_balance(amount),
                start_epoch: tx_context::epoch(ctx),
            },
            tx_context::sender(ctx)
        );
    }

    // Unstake by consuming the Delegation object
    public entry fun unstake(delegation: Delegation, ctx: &mut TxContext) {
        let Delegation { id, stake, start_epoch: _ } = delegation;
        object::delete(id);
        // reward calculation omitted; returns the locked stake
        transfer::public_transfer(coin::from_balance(stake, ctx), tx_context::sender(ctx));
    }
}

In this setup:

  • Alice and Bob can stake at the same time without contention.
  • The only time contention happens is if Alice herself tries to perform two simultaneous actions on her own delegation object, which is expected.
  • The protocol can handle thousands of users staking or unstaking in parallel, since each user touches only their own delegation object.

For global metrics like total staked, I avoid constantly mutating a shared object. Instead, I emit events when users stake or unstake, and an off-chain indexer or analytics service aggregates those totals in real time. If I really need an on-chain total, I shard the tally across multiple “bucket” objects and update them randomly or deterministically by user, which spreads load instead of concentrating it.
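The off-chain tally can be sketched as a toy indexer in TypeScript (the event shape here is illustrative, not Sui's actual event schema):

```typescript
// Toy off-chain indexer: folds stake/unstake events into running totals,
// so no on-chain shared object has to track them.
type StakeEvent = { kind: "stake" | "unstake"; user: string; amount: number };

function aggregateTotals(events: StakeEvent[]): { total: number; byUser: Map<string, number> } {
  const byUser = new Map<string, number>();
  let total = 0;
  for (const e of events) {
    const delta = e.kind === "stake" ? e.amount : -e.amount;
    total += delta;
    byUser.set(e.user, (byUser.get(e.user) ?? 0) + delta);
  }
  return { total, byUser };
}
```

In practice the events would stream from a node's event API into this fold, and the UI would query the materialized view.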

This pattern — per-user/per-entity objects, sharded aggregates, and event-driven global views — is how I consistently avoid bottlenecks. It’s why Sui’s parallelism feels natural: I design state to match how users interact, not as one big shared ledger.

So in practice, by modeling each staker with their own object, I transform a staking pool from a single-threaded choke point into a massively parallel system. That’s the kind of design that lets me scale throughput without sacrificing correctness.

Aquila007.
Sep 14 2025, 23:06

When I think about leveraging Sui’s parallel execution, I always start by asking: where will contention naturally happen? If multiple transactions constantly touch the same shared object, they will serialize, and I lose throughput. So my job as a developer is to design around that by partitioning state.

Take a concrete example: suppose I’m building a decentralized game marketplace where players can buy and sell in-game items. The naïve design would be to keep one big Marketplace shared object that stores all listings. Every new listing or purchase would mutate this object — and that would create a massive bottleneck because every transaction depends on the same shared object.

Instead, on Sui, I redesign the marketplace so that each listing is its own object.

  • When a seller lists an item, the system creates a Listing object with its own ID, price, and ownership.
  • Buyers interact directly with that Listing object when they purchase, without touching a global registry.

This way, multiple purchases across different listings happen in parallel with no contention. The only time I might touch a shared object is for things like protocol-level fees, but even then I often shard those fee buckets to spread the load.

Here’s a simplified Move-style sketch:

module marketplace::marketplace {
    use sui::coin::{Self, Coin};
    use sui::object::{Self, UID};
    use sui::sui::SUI;
    use sui::transfer;
    use sui::tx_context::{Self, TxContext};

    const EInsufficientPayment: u64 = 0;

    /// One shared object per listing, so different listings never contend.
    struct Listing<T: key + store> has key {
        id: UID,
        item: T,
        price: u64,
        seller: address,
    }

    // Seller creates a new listing as its own shared object
    public entry fun create_listing<T: key + store>(
        item: T,
        price: u64,
        ctx: &mut TxContext
    ) {
        transfer::share_object(Listing {
            id: object::new(ctx),
            item,
            price,
            seller: tx_context::sender(ctx),
        });
    }

    // Buyer purchases directly from the listing object
    public entry fun purchase<T: key + store>(
        listing: Listing<T>,
        payment: Coin<SUI>,
        ctx: &mut TxContext
    ) {
        let Listing { id, item, price, seller } = listing;
        assert!(coin::value(&payment) >= price, EInsufficientPayment);
        object::delete(id);

        // transfer item to buyer
        transfer::public_transfer(item, tx_context::sender(ctx));

        // pay seller
        transfer::public_transfer(payment, seller);
    }
}

Notice how each Listing object is independent. That means:

  • Two buyers purchasing two different items don’t block each other.
  • Thousands of transactions can run in parallel because they don’t touch the same object.
  • The executor only serializes if two buyers race for the same listing — which is the correct, expected contention.

On the client side, I also handle retries gracefully: if two buyers try to purchase the same listing at once, one transaction succeeds and the other fails with a version mismatch. The failed client simply gets an error and can refresh the latest state.

By designing the state around independent objects, I let Sui’s parallel executor shine. Instead of a single bottlenecked Marketplace object, I get a system that naturally scales with demand — every listing is its own concurrency lane.

That’s the mindset I carry across all my dApps: look for potential hotspots, break them into finer-grained objects, and make retries a normal part of the workflow. That way, I maximize throughput and deliver a smooth experience to users.

Champ✊🏻.
Sep 14 2025, 23:39

I design Sui dApps so they touch as few shared objects as possible, partition state into many small objects, and use optimistic retry patterns — that way Sui’s parallel executor can run many transactions concurrently and my app avoids hot-object contention that kills throughput.


How I actually do it — practical patterns

1. Prefer per-actor objects over a global object

I store mutable state per user/session (one object per user, e.g., UserProfile::<id> or Balance::<user_id>). Because Sui executes transactions in parallel when their object-sets don’t overlap, having many small objects gives me maximal parallelism. Global single-writer structures are the biggest throughput bottleneck.

2. Replace global counters with sharded/append-only patterns

I avoid a single global counter. Instead I:

  • shard counters (e.g., counter_shard_0, counter_shard_1, ...), and pick a shard deterministically or randomly per tx; or
  • use append-only objects (each writer creates a small “append” object) and have a background aggregator batch-merge them into a compact view in off-chain indexers or a single consolidation tx performed rarely.

This converts many conflicting writes into independent writes that can run in parallel and only occasionally require aggregation.

3. Use dynamic fields / child objects for per-item state

I put frequently updated substate into dynamic fields or child objects rather than mutating one parent object. Dynamic fields are separate objects under the hood, so accesses that only touch different fields do not conflict.

4. Avoid shared objects where possible; when needed, isolate their hot-path

Shared objects are inherently serialized for safety. If I must use a shared object (e.g., a marketplace listing registry), I keep the shared object small and store heavyweight or frequently-changing data off it (per-listing object or per-seller object). I also move non-critical operations off-chain or into asynchronous flows.

5. Make transactions touch a minimal object set

I structure transactions to:

  • declare only necessary reads/writes,
  • split bigger workflows into multiple smaller transactions (for example: reserve → confirm → finalize),
  • move read-only checks to client-side or separate read-only RPC calls when safe.

Smaller object-sets = higher chance of non-overlap = parallel execution.

6. Use optimistic retries and idempotent operations on the client

I build clients to expect contention: they submit a tx, and if it fails due to object version conflicts, they fetch the latest object versions and retry with exponential backoff. I design my on-chain entrypoints to be idempotent where possible so retries are safe.

Example (pseudo-JS client flow):

async function tryTx(buildTx, maxRetries = 5) {
  const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));
  for (let i = 0; i < maxRetries; i++) {
    const tx = await buildTx(); // reads fresh object refs, builds intent
    const res = await submit(tx); // submit(): your SDK's sign-and-execute call
    if (res.success) return res;
    // on version conflict, refresh local view and retry with exponential backoff
    await sleep(50 * Math.pow(2, i));
  }
  throw new Error("tx failed after retries");
}

7. Offload read-heavy / non-consensus work to indexers and off-chain stores

I emit events for actions and let an indexer or a secondary DB build the materialized view for UI queries. That avoids putting heavy read or aggregation load on on-chain objects and keeps transactional paths lean.

8. Use role-based single-committers for heavy coordination

For workloads that truly require a single serialization point (e.g., final settlement), I isolate that into a single committer actor that batches many incoming operations and commits them in bulk. Writers append to per-writer objects (cheap and parallel); the committer later consumes those appends and updates the canonical object in a controlled manner.
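The batching half of that pattern can be sketched in TypeScript (on-chain calls elided; `Committer` is my illustration): writers append cheap records in parallel, and one committer periodically folds them into the canonical total.

```typescript
// Writers append independent records; one committer drains them and
// applies a single consolidated update to the canonical state.
type Append = { writer: string; delta: number };

class Committer {
  private queue: Append[] = [];
  public canonicalTotal = 0;

  // Called from many writers; each append is independent (parallel-friendly).
  append(rec: Append): void {
    this.queue.push(rec);
  }

  // Called rarely; the only step that touches the canonical state.
  commitBatch(): number {
    const batch = this.queue.splice(0, this.queue.length);
    this.canonicalTotal += batch.reduce((sum, r) => sum + r.delta, 0);
    return batch.length;
  }
}
```

On Sui, `append` would correspond to writing a per-writer object and `commitBatch` to the rare transaction that touches the shared canonical object.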

9. Design data models with concurrency in mind (CRDT-like)

Where possible, I model state as mergeable — e.g., use commutative operations (add-only sets, monotonic counters) so I can merge concurrent updates off-chain or with a deterministic on-chain merge that avoids conflicts.
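A grow-only counter (the classic G-Counter CRDT) is the simplest example: each writer increments only its own slot, and replicas merge by per-writer maximum, so concurrent updates commute. A TypeScript sketch:

```typescript
// Grow-only counter (G-Counter): each writer increments only its own slot;
// merging two replicas takes the per-writer max, which is commutative.
type GCounter = Map<string, number>;

function increment(c: GCounter, writer: string, by = 1): void {
  c.set(writer, (c.get(writer) ?? 0) + by);
}

function merge(a: GCounter, b: GCounter): GCounter {
  const out = new Map(a);
  for (const [w, n] of b) {
    out.set(w, Math.max(out.get(w) ?? 0, n));
  }
  return out;
}

// The counter's value is the sum over all writer slots.
function value(c: GCounter): number {
  let sum = 0;
  for (const n of c.values()) sum += n;
  return sum;
}
```

Because merge is commutative and idempotent, shard states can be combined in any order off-chain, or by a deterministic on-chain merge.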

10. Monitor and iterate: measure hot objects and refactor

I log which object IDs see the most updates and which transactions conflict the most. If a particular object is hot, I refactor it into smaller pieces or move to the shard/append pattern. Instrumentation guides the refactor work.


Concrete checklist I follow before launching a feature

  1. Can this state be per-user or per-entity instead of global? If yes, do it.
  2. Will many users update this same object concurrently? If yes, shard or make it append-only.
  3. Can I move non-essential updates off-chain or into events? If yes, emit an event and update indexers instead of the object.
  4. Are my transactions small and focused? If not, split them.
  5. Do clients implement retry/backoff and idempotency? If not, add it.

Final note

Sui’s parallel execution is powerful if you design for object-disjointness. I treat objects as the unit of concurrency: minimize shared-object touches, partition state aggressively, and use retries + off-chain aggregation for anything that would otherwise serialize. Following those principles, I get high throughput and a much better UX under load.

Mustarphy1.
Sep 14 2025, 23:51

When I design dApps on Sui, I approach parallel transaction execution as both an opportunity and a constraint. The opportunity is obvious: I can get massive throughput if my transactions don’t compete for the same objects. The constraint is that if I fall back on global, shared objects for core logic, I end up bottlenecked just like on traditional chains.

So the first design choice I make is to model state in a way that distributes ownership. For example, instead of one marketplace object that everyone writes into, I let each listing or each seller have their own object. That way, updating or purchasing one listing doesn’t block activity elsewhere.

For situations where a global counter or state seems unavoidable, I ask: can I restructure this into a sharded or composable design? A good pattern I use is append-only writes: each actor creates a record object that represents their contribution, and then I aggregate these records later, either through an off-chain indexer or via a controlled on-chain batch transaction. This approach lets me absorb a high volume of concurrent updates without serializing everything through one “hot” object.

Another key technique is object granularity. I keep objects small and scoped so that transactions touch only the data they truly need. If I see contention in testing, I break objects down further, often into child objects or dynamic fields. This keeps the executor free to schedule non-overlapping transactions in parallel.

On the client side, I design my workflow around optimistic concurrency. I expect conflicts to happen, so my dApp retries failed transactions gracefully. I also make on-chain functions idempotent where possible, so a retried tx doesn’t risk duplicating an effect.
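The idempotency guard can be sketched in TypeScript (the operation-ID scheme is my illustration, not a Sui primitive): each logical operation carries a client-generated ID, and replays of an already-applied ID become no-ops.

```typescript
// Retry-safe handler: duplicate submissions of the same operation ID
// have no effect, so optimistic retries cannot double-apply.
class IdempotentLedger {
  private applied = new Set<string>();
  public balance = 0;

  // Returns true if the operation was applied, false if it was a replay.
  deposit(opId: string, amount: number): boolean {
    if (this.applied.has(opId)) return false; // duplicate retry: no-op
    this.applied.add(opId);
    this.balance += amount;
    return true;
  }
}
```

The same guard works on-chain by recording consumed operation IDs in the object being mutated, at the cost of some storage.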

Finally, I offload read-heavy or non-critical state to external indexers. Events give me a way to reconstruct the system view without forcing every detail into on-chain shared state. This not only reduces contention but also improves the user experience with faster queries.

In practice, leveraging Sui’s parallelism is about respecting the object model: avoid hot spots, shard aggressively, and design for retries. When I follow those principles, I get the full benefit of parallel execution—high throughput, low latency, and a much smoother scaling path for my dApps.

