Terms like `controller`, `VaultInterface`, and the ERC-7540 async request/claim pattern are defined in Overview and Base operations. Start there if anything below reads as unfamiliar.

Each failure below is presented in one of three ways:

- Prose when the failure applies identically on both hub and spoke (e.g., exchange-rate drift, preview reverts).
- Chain label in the heading when the failure only fires on one chain (e.g., `NoPendingClaim` is spoke-only).
- Tabs for hub vs spoke when the same semantic failure surfaces as a different error depending on where you call it.
Controller parameter
The `controller` in `requestRedeem(shares, controller, owner)` is the address authorized to claim the resulting withdrawal request. It is the most important parameter for cross-chain integrators to get right.
Two patterns:
- Pattern 1 — contract is controller. Works on the hub vault only. The contract can call `redeemById` on the hub vault directly to claim. Does not work on a spoke — spoke `redeem` enforces `msg.sender == controller` with no operator-delegation path, and a contract that lives on a spoke cannot directly call the hub vault to recover.
- Pattern 2 — user EOA is controller. Works on both hub and spoke. The user calls `redeem` or `redeemById` directly. This is the correct pattern for any cross-chain integration where the user should be able to claim from a spoke.
Controller mismatch: InvalidRequest vs InputMustBeSender
Same semantic failure (“the caller isn’t authorized to claim this request”), different error surface depending on where you called `redeem` — this is a chain-variant failure.
- Hub vault (Base)
- Spoke `VaultInterface`
Error: `InvalidRequest()` — overloaded. The hub vault uses this single error for three distinct failures:

- The `requestId` doesn’t exist.
- The request was already claimed (the vault zeros `unlockTime` on claim).
- The caller is not the `controller` and not an approved operator.
Triage steps:

- Recall that `controller` is the address authorized to claim.
- Call `getWithdrawalRequest(requestId)`. If this call also reverts with `InvalidRequest()`, the request doesn’t exist or was already claimed.
- If `getWithdrawalRequest` returns a `WithdrawalRequest` struct, compare `req.controller` to the caller you were using for `redeemById`.
- If they don’t match, check operator approval with `isOperator(req.controller, msg.sender)`. An approved operator can also claim (see `setOperator` semantics below).
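The triage above can be sketched as a small classifier. This is a hedged illustration, not the vault source: `HubVaultReads` is a hypothetical interface standing in for the two hub view calls, and the mock in the usage example stands in for real contract reads.

```typescript
// Sketch of the InvalidRequest() triage, assuming mocked hub-vault reads.
// In a real integration these would be contract view calls on Base.

type WithdrawalRequest = { controller: string; unlockTime: bigint };

interface HubVaultReads {
  // Returns null where the on-chain call would itself revert with
  // InvalidRequest() (request missing or already claimed).
  getWithdrawalRequest(requestId: bigint): WithdrawalRequest | null;
  isOperator(controller: string, operator: string): boolean;
}

type TriageResult =
  | "missing-or-claimed"    // getWithdrawalRequest also reverts
  | "caller-authorized"     // caller is controller or an approved operator
  | "caller-not-authorized";

function triageInvalidRequest(
  vault: HubVaultReads,
  requestId: bigint,
  caller: string,
): TriageResult {
  const req = vault.getWithdrawalRequest(requestId);
  if (req === null) return "missing-or-claimed";
  if (req.controller.toLowerCase() === caller.toLowerCase()) {
    return "caller-authorized";
  }
  // On the hub, an approved operator of the controller can also claim.
  if (vault.isOperator(req.controller, caller)) return "caller-authorized";
  return "caller-not-authorized";
}

const mock: HubVaultReads = {
  getWithdrawalRequest: (id: bigint) =>
    id === 1n ? { controller: "0xAaa", unlockTime: 0n } : null,
  isOperator: (_c: string, o: string) => o === "0xBbb",
};
```

Running the classifier against the mock distinguishes all three causes that the single `InvalidRequest()` error collapses together.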
setOperator semantics
`setOperator(operator, approved)` is an ERC-7540 standard function, and it’s permissionless — think of it as ERC-20 `approve()` but for vault operations.

`setOperator(operator, approved)` writes `isOperator[msg.sender][operator] = approved`. It only delegates access to the caller’s own withdrawal requests. There’s no admin role or front-running concern — a third party calling `setOperator` only grants access to their own requests, not yours.

- The hub vault additionally reverts if `operator == msg.sender` (`OperationNotAllowed`). A contract cannot self-delegate via `setOperator`; this closes a self-operator trick that would otherwise let Pattern-1-on-spoke contracts rescue themselves.
- Operator delegation only works on the hub vault. Spokes do not honor it (see the `InputMustBeSender` tab above).
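A minimal model of the hub’s `setOperator` bookkeeping, assuming exactly the mapping semantics described above. This is illustrative TypeScript, not the Solidity source; the class and method names are hypothetical.

```typescript
// Model of isOperator[msg.sender][operator] on the hub vault: the caller
// can only ever delegate their OWN requests, and self-delegation reverts.
class OperatorRegistry {
  private approvals = new Map<string, Set<string>>();

  // Mirrors setOperator(operator, approved) called by `caller`.
  setOperator(caller: string, operator: string, approved: boolean): void {
    if (caller === operator) {
      throw new Error("OperationNotAllowed"); // hub rejects self-delegation
    }
    const set = this.approvals.get(caller) ?? new Set<string>();
    if (approved) set.add(operator);
    else set.delete(operator);
    this.approvals.set(caller, set);
  }

  isOperator(controller: string, operator: string): boolean {
    return this.approvals.get(controller)?.has(operator) ?? false;
  }
}
```

Note the asymmetry: approving `0xB` as your operator grants `0xB` access to your requests only, never the reverse — which is why there is no front-running concern.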
Multi-spoke origin tracking
Scope declaration. YieldPoint does not support the multi-spoke concurrent-request pattern. If your controller has pending withdrawal requests on more than one spoke at the same time, you are outside the supported workflow. This section describes the failure modes for readers who land here from a failing integration, plus a detection recipe and — where possible — a recovery path.
Why it breaks
The hub vault processes withdrawal requests in chain-agnostic FIFO order. The `WithdrawalRequest` struct has no `sourceEid` or origin-chain field — a request issued from Avalanche and a request issued from Katana are indistinguishable in hub storage. Each spoke tracks its own per-controller `pendingClaims` counter, which increments on `requestRedeem` and decrements on `redeem`, but the counters don’t sync cross-chain. Three failure modes arise from this mismatch:
- Cross-spoke sweeping. A `redeem` call from spoke A can settle requests originated from spoke B, because the hub processes FIFO and doesn’t know the origin.
- Partial stranding. If a spoke claim consumes its ticket for less than the full hub-side request amount, the remaining shares are only claimable directly from the hub, not from either spoke.
- Orphaned tickets. After cross-spoke sweeping, the “source” spoke still has `pendingClaims > 0` (its counter wasn’t decremented because the hub settled into a different spoke’s claim). A follow-up `redeem` on the source spoke consumes the ticket on the spoke side (counter decrements, LZ fee debited) but the hub rejects with `ERC4626ExceededMaxRedeem` (or similar) because the matching hub state has already been settled. The ticket is burned, the gas tank is debited, and nothing arrives.
Pre-claim detection recipe
Before calling `redeem` on a spoke, confirm no concurrent cross-spoke claim is in flight:
- Call `getWithdrawalRequests(controller, offset, limit)` on the hub vault on Base — note the three-arg signature. See the Base operations function reference for the full return shape.
- Compare the hub array length to the sum of your own per-spoke `pendingClaims(controller)` counts across all spokes your integration tracks.
- If the hub count exceeds the sum of your per-spoke counts, a concurrent request is in flight from a chain you’re not tracking. Either wait for the in-flight round trip to settle, or claim directly on the hub with `redeemById(requestId, receiver)`.
Recovery preconditions
If you’re already stranded (a spoke ticket is burned, a hub request is stuck), recovery depends on which controller pattern you chose:

- Pattern 2 (user EOA as controller): The user calls `redeemById(requestId, receiver)` on the hub vault on Base directly. This requires the user to transact on Base — hold ETH for gas and switch chains in their wallet. It works but the UX is “leave the spoke.”
- Pattern 1 (contract as controller): If the contract lives on Base, call `redeemById` directly. If the contract lives on a spoke and cannot call Base, there is no clean recovery path. The contract cannot self-delegate (`setOperator` reverts on self), and the user EOA cannot claim either (they’re not the controller). Partners who reach this state should escalate to YieldPoint integration support via Discord `#integrations`.
- Design-time fix: use Pattern 2 from the start. If you’re reading this because you’re stuck, pattern choice is the root cause — future integrations of the same flow should adopt Pattern 2.
FeeExceedsAmount on deposit or redeem request
Error: `FeeExceedsAmount(uint256 fee, uint256 amount)`. Source contract: `UTYVaultInterface` (on each spoke) for the three-arg `deposit(assets, receiver, controller)` and `requestRedeem(shares, controller, owner)` calls. Also fires on the hub UTY vault’s own deposit path for the same underflow condition.
The flat deposit and redeem fees are deducted from the amount before bridging. If the amount is less than or equal to the fee, the call reverts — the vault won’t accept a zero-value or negative-value post-fee amount.
Fix. Before calling, read the current fee and require strict inequality:
- For `requestRedeem(shares, ...)`: require `shares > redeemFlatFee()`.
- For `deposit(assets, ...)`: require `assets > depositFlatFee()`.
The `fee` and `amount` fields on the error tell you exactly how far under the threshold you were — useful for a UI that wants to display “minimum redeemable” hints.
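The guard is a one-line strict inequality. A sketch, assuming the flat fee has already been read from the vault via `redeemFlatFee()` or `depositFlatFee()` (the helper names here are hypothetical):

```typescript
// Pre-flight guard for FeeExceedsAmount: the vault deducts a flat fee
// before bridging, so the amount must STRICTLY exceed the fee.
function passesFlatFeeCheck(amount: bigint, flatFee: bigint): boolean {
  return amount > flatFee; // amount == fee still reverts (zero post-fee value)
}

// "Minimum redeemable/depositable" hint for a UI: smallest passing amount.
function minimumAmount(flatFee: bigint): bigint {
  return flatFee + 1n;
}
```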
NoPendingClaim() on the spoke
The spoke `VaultInterface.redeem` requires `pendingClaims[controller] > 0` before it fires the cross-chain message. Without a prior `requestRedeem` on the same spoke from the same controller, the call reverts with `NoPendingClaim()`.

The counter is per-controller and per-spoke. A `requestRedeem` on Avalanche does not create a claim ticket on Katana. If you request on Avalanche and try to claim on Katana, you get `NoPendingClaim()` — which is also the signal that the design is multi-spoke and you should revisit the scope declaration above.
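A toy model of the per-spoke ticket counter, assuming the increment/decrement semantics described above (illustrative only; the class name is hypothetical):

```typescript
// pendingClaims is tracked per spoke AND per controller; counters never
// sync across spokes, so a ticket only exists where the request was made.
class SpokeTickets {
  private counts = new Map<string, number>(); // key: `${spoke}:${controller}`

  requestRedeem(spoke: string, controller: string): void {
    const k = `${spoke}:${controller}`;
    this.counts.set(k, (this.counts.get(k) ?? 0) + 1);
  }

  redeem(spoke: string, controller: string): void {
    const k = `${spoke}:${controller}`;
    const n = this.counts.get(k) ?? 0;
    if (n === 0) throw new Error("NoPendingClaim"); // no ticket on THIS spoke
    this.counts.set(k, n - 1);
  }
}
```

Requesting on one spoke and claiming on another throws in this model for the same reason the real spoke reverts: the ticket lives only on the origin spoke.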
ERC4626ExceededMaxRedeem on premature claim (UTY only)
Applies to the UTY vault, which has a non-zero bonding period (7 days at current config). The spoke does not gate the claim client-side — it forwards the cross-chain message, and the hub rejects because `maxRedeem(controller)` returns 0 during bonding. You see `ERC4626ExceededMaxRedeem` (or a similar standard ERC-4626 revert) after the LZ round trip.

Fix: check `request.unlockTime` against `block.timestamp` before calling `redeem`. For yUTY (`bondingPeriod == 0`), this check is trivially satisfied in the next block.
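The client-side gate is a single timestamp comparison. A sketch, assuming `unlockTime` has already been read from the hub-side `WithdrawalRequest`:

```typescript
// Avoid burning an LZ round trip (and a spoke ticket) on a premature UTY
// claim: the hub rejects while the bonding period is still running.
function isClaimable(unlockTime: bigint, nowTimestamp: bigint): boolean {
  // For yUTY (bondingPeriod == 0) this is trivially true in the next block.
  return nowTimestamp >= unlockTime;
}
```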
Preview functions revert on async vaults
`previewRedeem()` and `previewWithdraw()` are ERC-4626-native and don’t know about ERC-7540’s request-and-claim pattern. On the hub vaults they revert with `ERC7540NotSupported`. Use the async equivalents instead:
| Don’t call | Call this instead |
|---|---|
| `previewRedeem(shares)` | `previewRequestRedeem(shares)` |
| `previewWithdraw(assets)` | `previewRequestWithdraw(assets)` |
The async previews return the expected output (assets for shares, or shares for assets) at the current exchange rate.
Exchange-rate drift on yUTY
The mainnet yUTY vault has received donations (that’s how yield is distributed — `donate()` increases the per-share value), so `totalAssets() != totalSupply()` and a yUTY share is worth more than one UTY at any given time. Use `convertToAssets(shares)` and `convertToShares(assets)` for accurate conversions; don’t assume 1:1.

UTY does not have this drift because the UTY vault overrides `donate()` to revert (preserving the 1:1 peg with USDC).
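The drift is just share-price math. A simplified sketch of why a donation breaks the 1:1 assumption — note the real `convertToAssets` applies the vault’s own rounding and inflation-attack protections, so treat this as illustrative, not the on-chain formula:

```typescript
// Simplified ERC-4626-style conversion: a donation raises totalAssets
// without minting shares, so each share is worth more than one asset.
function convertToAssets(
  shares: bigint,
  totalAssets: bigint,
  totalSupply: bigint,
): bigint {
  return (shares * totalAssets) / totalSupply; // integer division rounds down
}
```

Before any donation (`totalAssets == totalSupply`) the rate is 1:1; after a donation the same share count converts to more assets, which is exactly the drift described above.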
InsufficientFunds from gas-tank depletion
Error: `InsufficientFunds(uint256 availableFunds, uint256 requiredFunds)`. Source contracts: `UTYVaultInterface` (on each spoke, for outbound spoke → hub messages) and `UTYVaultComposer` (on the hub, for return-hop messages back to a spoke).
Under normal operation, YieldPoint ops keeps the gas tanks funded. Depletion is rare but possible — if it happens, this is how to recognize and recover:
Observable signal. A spoke-side `GasTankDebited` event is emitted without the corresponding hub-side event (hub `RedeemRequest` for a withdrawal request, or hub composer `GasTankDebited` for a deposit return-hop) arriving within the typical LayerZero latency window (under a minute). If you see a spoke `GasTankDebited.guid` with no matching hub event after a reasonable wait, suspect gas-tank depletion.
Funds are safe. The LayerZero message is queued at the endpoint, not lost. LZ message retries are permissionless — once ops refills the relevant gas tank, anyone can retry the message, and the user need not take any action. No funds are at risk during the depletion window.
Inspection. Look up the stuck message at `https://layerzeroscan.com/tx/<txhash>` — LayerZero Scan is the canonical viewer for message status.
Escalation. If the message remains unexecuted beyond the typical latency window (say, more than a few minutes), contact the YieldPoint integration team via Discord #integrations.
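The observable-signal check amounts to matching a spoke-side GUID against hub-side events inside a latency window. A sketch over already-fetched event logs — the `Evt` shape here is a hypothetical simplification of the real event payloads:

```typescript
// Flag a possibly-stuck LZ message: a spoke GasTankDebited guid with no
// matching hub-side event after the typical latency window has elapsed.
type Evt = { guid: string; timestamp: number }; // unix seconds

function suspectGasTankDepletion(
  spokeDebit: Evt,
  hubEvents: Evt[],
  nowTimestamp: number,
  latencyWindowSec = 60, // "under a minute" typical LayerZero latency
): boolean {
  const matched = hubEvents.some((e) => e.guid === spokeDebit.guid);
  return !matched && nowTimestamp - spokeDebit.timestamp > latencyWindowSec;
}
```

A `true` result is a signal to check LayerZero Scan and, if the message stays unexecuted, escalate — remember the message is queued at the endpoint, not lost.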