feat: add light account loader #2215
base: main
Conversation
📝 Walkthrough
This PR introduces zero-copy (Pod-based) account support for Light Protocol's compression system and renames the core macro from `#[rentfree_program]` to `#[light_program]`.
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes. Rationale: high heterogeneity across macros, codegen, and SDK layers; introduces new trait hierarchies such as `PodCompressionInfoField`.
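For orientation, the rename looks like this at a call site; a minimal sketch assuming the attribute is applied to the program module the way its "program-level" description suggests (module name and body are illustrative, not taken from this PR):

```rust
// Previously written as #[rentfree_program]; this PR renames the attribute.
#[light_program]
pub mod my_program {
    // instruction handlers ...
}
```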
🚥 Pre-merge checks: ✅ 3 passed.
| } | ||
|
|
||
| /// A field marked with #[light_account(init)] | ||
| #[allow(dead_code)] // is_zero_copy is read via From<PdaField> conversion in program module |
remove
| } | ||
|
|
||
| /// Generate mutable reference to account data (handles Box<Account> vs Account). | ||
| /// Generate mutable reference to account data (handles Box<Account>, Account, AccountLoader). |
check the error for Box<Box>
Actionable comments posted: 11
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (10)
sdk-libs/macros/src/light_pdas/README.md (3)
1-3: Update documentation title to reflect the macro rename. The title still refers to "Rent-Free Macros," but the macro has been renamed from `#[rentfree_program]` to `#[light_program]` (as shown on lines 15, 42). This creates confusion about the current naming convention.
📝 Suggested title update
-# Rent-Free Macros
+# Light Program Macros
-Procedural macros for generating rent-free account types and their hooks for Solana programs.
+Procedural macros for generating compressed account types and their hooks for Solana programs.
7-29: Fix directory path in the structure example. The directory structure example shows `rentfree/` as the root directory, but this file is located at `sdk-libs/macros/src/light_pdas/README.md`. This mismatch will confuse developers trying to navigate the codebase.
📝 Suggested directory structure fix
-rentfree/
+light_pdas/
 ├── mod.rs       # Module declaration
 ├── README.md    # This file
33-38: Update section header to use consistent naming. The section header uses "RentFree Derive Macro," which is inconsistent with the macro rename to `#[light_program]`. Consider updating to "Light Accounts Derive Macro" or similar terminology that aligns with the new naming convention.
📝 Suggested section header update
-### `accounts/` - RentFree Derive Macro
+### `accounts/` - Light Accounts Derive Macro

sdk-libs/macros/docs/features/light-features.md (1)
1-3: Title inconsistency with renamed macro. The document title still says "Light Protocol RentFree Features" while the macro has been renamed to `#[light_program]`. Consider updating the title for consistency with the new naming convention.
-# Light Protocol RentFree Features
+# Light Protocol Light Program Features
-This document covers the 17 features available in Light Protocol's rentfree macro system for creating compressed (rent-free) accounts and tokens.
+This document covers the 17 features available in Light Protocol's light_program macro system for creating compressed (rent-free) accounts and tokens.

sdk-libs/macros/docs/light_program/codegen.md (1)
7-29: Add language specifier to fenced code block. The directory structure code block is missing a language identifier. While not critical, this helps with consistent rendering and satisfies linting rules.
-```
+```text
 sdk-libs/macros/src/rentfree/program/

sdk-libs/macros/docs/light_program/architecture.md (2)
29-34: Inconsistent terminology in discovery diagram. Line 31 still references `#[rent-free]` in the discovery process description. For consistency with the `#[light_program]` rename, consider updating this to reflect the current attribute naming (e.g., `#[light_account]` or similar).
208-208: Consider renaming `RentFreeInstructionError` for consistency. The error type still uses the "RentFree" prefix. If this is a public API type, you may want to rename it to `LightInstructionError` or similar to align with the `light_program` naming convention. This is a minor consistency nit.

sdk-libs/macros/src/light_pdas/account/decompress_context.rs (1)
47-103: Pass `system_accounts_offset` to `light_pre_init` or pre-slice `remaining_accounts` before constructing `CpiAccounts`. The `CpiAccounts::new()` constructor explicitly documents: "The `accounts` slice must start at the system accounts." The example shows slicing as `&remaining_accounts[system_accounts_offset..]` before construction. However, the generated code in `builder.rs` (line 390) passes `_remaining` directly without this offset adjustment. The runtime's `process_decompress_accounts_idempotent` correctly slices before constructing `CpiAccounts` (line 223), but `light_pre_init` receives no `system_accounts_offset` parameter, making it unable to perform the required offset slicing. Either add `system_accounts_offset` to the `light_pre_init` signature so the macro builder can slice appropriately, or handle the slicing in the instruction signature itself before passing to the pre-init hook.
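A minimal sketch of that slicing pattern, assuming the `CpiAccounts::new(fee_payer, accounts, signer)` shape quoted from the SDK docs above and a `system_accounts_offset` available at the call site (both are assumptions, not code from this PR):

```rust
// Sketch only: make the slice handed to CpiAccounts start at the system accounts.
let system_accounts = &remaining_accounts[system_accounts_offset as usize..];
let cpi_accounts = CpiAccounts::new(
    fee_payer,               // assumed fee payer AccountInfo
    system_accounts,         // must start at the system accounts
    crate::LIGHT_CPI_SIGNER, // assumed CPI signer constant
);
```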
584-659: AccountLoader detection misses boxed loaders.
`Box<AccountLoader<'info, T>>` won’t be detected, so the `zero_copy` requirement can be skipped and lead to an incompatible decompression path.
🛠️ Proposed fix (recursive AccountLoader detection)
-fn is_account_loader_type(ty: &Type) -> bool {
-    if let Type::Path(type_path) = ty {
-        return type_path
-            .path
-            .segments
-            .iter()
-            .any(|seg| seg.ident == "AccountLoader");
-    }
-    false
-}
+fn is_account_loader_type(ty: &Type) -> bool {
+    match ty {
+        Type::Path(type_path) => {
+            if let Some(seg) = type_path.path.segments.last() {
+                if seg.ident == "AccountLoader" {
+                    return true;
+                }
+                if seg.ident == "Box" {
+                    if let syn::PathArguments::AngleBracketed(args) = &seg.arguments {
+                        if let Some(syn::GenericArgument::Type(inner_ty)) = args.args.first() {
+                            return is_account_loader_type(inner_ty);
+                        }
+                    }
+                }
+            }
+            false
+        }
+        _ => false,
+    }
+}
166-183: `Size` impl depends on Anchor's `Space` trait; ensure all `#[derive(Compressible)]` / `#[derive(LightCompressible)]` types include `#[derive(InitSpace)]` and `#[account]`. The generated `Size` implementation uses `<Self as anchor_lang::Space>::INIT_SPACE` (line 175), which requires the `Space` trait. While all documented examples pair these derives with `InitSpace`, the macro doesn't enforce this requirement. Verify that every use of `Compressible` or `LightCompressible` also includes `InitSpace` and `#[account]`, or implement a custom `Size` trait if used in non-Anchor contexts.
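As a reference point, the derive pairing this comment asks to verify looks roughly like the following (a sketch: imports are elided, field names are illustrative, and the exact derive list follows the documented examples rather than this PR's code):

```rust
#[account]
#[derive(InitSpace, LightCompressible)] // InitSpace supplies <Self as Space>::INIT_SPACE for the generated Size impl
pub struct UserRecord {
    pub compression_info: Option<CompressionInfo>, // Borsh (non-Pod) layout keeps this optional
    pub owner: Pubkey,
    pub score: u64,
}
```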
🤖 Fix all issues with AI agents
In `@sdk-libs/macros/docs/CLAUDE.md`:
- Around line 51-52: The fenced code block containing the ASCII macro hierarchy
entry "#[light_program] <- Program-level (light_program/)" should
include a language identifier (e.g., text) to satisfy markdown linters and
improve accessibility; update the fence surrounding that block so it starts with
three backticks plus an identifier (for example "```text") while leaving the
ASCII content unchanged.
In `@sdk-libs/macros/src/light_pdas/account/traits.rs`:
- Around line 263-282: The current validate_pod_compression_info_field only
checks for a field named compression_info but must also validate its type is the
non-optional CompressionInfo; update validate_pod_compression_info_field to
locate the Field with ident "compression_info" and then inspect Field.ty (a
syn::Type) to ensure it is a Type::Path whose final segment is "CompressionInfo"
(or a path that ends with light_compressible::compression_info::CompressionInfo)
and explicitly reject Option<...> (i.e., a Type::Path whose first/last segment
is "Option" with a generic arg) — if the type is missing or not the expected
non-optional CompressionInfo return Err with the same message; keep the existing
Ok(()) on success.
In `@sdk-libs/macros/src/light_pdas/program/compress.rs`:
- Around line 95-118: The zero-copy arm in compress_arms currently slices
data_borrow[8..8 + core::mem::size_of::<#name>()] without validating length,
which can panic on malformed accounts; update the block in the closure (the code
that uses qualify_type_with_crate and calls
light_sdk::interface::compress_account::prepare_account_for_compression_pod::<#name>)
to first compute let needed = 8 + core::mem::size_of::<#name>() and check
data_borrow.len() >= needed, returning an appropriate program error (mapped via
__anchor_to_program_error or the existing error mapping path) if the buffer is
too small, before taking the slice and proceeding to bytemuck::from_bytes and
prepare_account_for_compression_pod.
In `@sdk-libs/macros/src/light_pdas/program/instructions.rs`:
- Around line 312-336: The zero-copy branch in the data_verifications generation
unconditionally calls seeds.<field>.to_bytes(), which fails for non-Pubkey
fields; change the logic in the filter_map that builds data_verifications (using
is_zero_copy and field_str) to only call .to_bytes() when the field's type is
Pubkey (e.g., check a Pubkey field set on ctx_info such as
ctx_info.pubkey_field_names or inspect ctx_info.state_field_types for the
field_str); for non-Pubkey fields emit the direct comparison (if data.#field !=
seeds.#field ...) so zero_copy handling only converts Pubkey seeds to bytes
while other types are compared unchanged.
In `@sdk-libs/macros/src/light_pdas/program/variant_enum.rs`:
- Around line 710-716: The local magic constant MAX_SEEDS = 16 used to size
seed_refs (and related to seeds_vec) should be documented or replaced with a
shared constant; update the code in variant_enum.rs to either reference a
central constant (e.g., a shared SOLANA_PDA_MAX_SEEDS or PDA_MAX_SEEDS) instead
of declaring MAX_SEEDS inline, or add a comment explaining it is the Solana PDA
seed limit, and ensure seed_refs: [&[u8]; PDA_MAX_SEEDS] and the len calculation
use that shared symbol so the limit is obvious and maintainable.
- Around line 346-358: The match arms for compression_info_mut_opt currently
call panic!() for zero-copy unpacked and packed variants (in
LightAccountVariant::`#variant_name` and
LightAccountVariant::`#packed_variant_name`), which should instead return a
non-panicking sentinel: update the trait method compression_info_mut_opt (or its
implementation here) to return a fallible type (e.g., Option<&mut
Option<CompressionInfo>> or Result<Option<&mut Option<CompressionInfo>>,
YourError> where HasCompressionInfo is referenced) and change these arms to
return the sentinel (None or Err(...)) rather than panic; if the panic behavior
is intentional, add a doc comment on compression_info_mut_opt and the
LightAccountVariant match arms documenting the exact misuse conditions that
cause panic.
In `@sdk-libs/macros/src/light_pdas/README.md`:
- Line 40: The section header "RentFree Program Macro" is outdated; update it to
match the renamed macro by changing the header to reference the #[light_program]
attribute macro (e.g., "program/ - #[light_program] Program Macro" or similar)
so the header and the subsequent documentation for the `#[light_program]`
attribute macro are consistent; edit the README's "program/" section header to
reflect #[light_program] and ensure wording matches the existing description
below.
In `@sdk-libs/sdk/src/interface/compress_account.rs`:
- Around line 218-276: The code reads and writes Pod slices without validating
lengths, which can panic; before taking compression_info_bytes and creating
pod_data you must check that account_data.len() (or
bytemuck::bytes_of(account_data).len()) is >= A::COMPRESSION_INFO_OFFSET +
size_of::<SdkCompressionInfo>() and that account_info.try_borrow_mut_data()? has
length >= discriminator_len + A::COMPRESSION_INFO_OFFSET +
size_of::<SdkCompressionInfo>(); if either check fails return
Err(LightSdkError::ConstraintViolation.into()); update locations referencing
SdkCompressionInfo, A::COMPRESSION_INFO_OFFSET, account_data, account_info,
discriminator_len and pod_data to perform these bounds checks before slicing or
copying.
In `@sdk-libs/sdk/src/interface/compression_info.rs`:
- Around line 595-596: The CPI transfer uses an incorrect program id: replace
the placeholder Pubkey::default() in the Instruction construction (the
transfer_instruction's program_id field) with the real System Program ID, e.g.
solana_program::system_program::ID (or the literal
"11111111111111111111111111111111") so the Instruction.program_id points to the
System Program rather than the all-zero pubkey.
In `@sdk-libs/token-sdk/src/compressible/decompress_runtime.rs`:
- Around line 251-254: In decompress_full_token_accounts_with_indices, guard the
CPI-context insertion on has_prior_context so we don't shift account ordering:
only call cpi_accounts.cpi_context() and push its clone into all_account_infos
when has_prior_context is true (i.e., wrap the existing
cpi_accounts.cpi_context() block with a has_prior_context check) to keep indices
aligned for downstream CPI handling.
- Around line 217-225: Replace the brittle index-based access
cpi_accounts.account_infos().first() with the dedicated accessor
cpi_accounts.light_system_program()? so the code uses the enum-based index
method used elsewhere (see registered_program_pda() and
account_compression_authority()); update the all_account_infos.push call to push
the result of cpi_accounts.light_system_program()? and propagate the ?-based
error handling to match the surrounding pattern.
| ``` | ||
| #[rentfree_program] <- Program-level (rentfree_program/) | ||
| #[light_program] <- Program-level (light_program/) |
🧹 Nitpick | 🔵 Trivial
Consider adding a language identifier to the fenced code block.
The macro hierarchy diagram uses a plain fenced code block. While it's an ASCII diagram (not executable code), adding a language identifier like text or leaving it empty with triple backticks satisfies markdown linters and improves accessibility for screen readers.
📝 Suggested fix
-```
+```text
#[light_program]  <- Program-level (light_program/)

🧰 Tools
🪛 markdownlint-cli2 (0.18.1)
51-51: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
| /// Validates that the struct has a `compression_info` field for Pod types. | ||
| /// Unlike Borsh version, the field type is `CompressionInfo` (not `Option<CompressionInfo>`). | ||
| /// Returns `Ok(())` if found, `Err` if missing. | ||
| fn validate_pod_compression_info_field( | ||
| fields: &Punctuated<Field, Token![,]>, | ||
| struct_name: &Ident, | ||
| ) -> Result<()> { | ||
| let has_compression_info = fields | ||
| .iter() | ||
| .any(|f| f.ident.as_ref().is_some_and(|name| name == "compression_info")); | ||
|
|
||
| if !has_compression_info { | ||
| return Err(syn::Error::new_spanned( | ||
| struct_name, | ||
| "Pod struct must have a 'compression_info: CompressionInfo' field (non-optional). \ | ||
| For Pod types, use `light_compressible::compression_info::CompressionInfo`.", | ||
| )); | ||
| } | ||
| Ok(()) | ||
| } |
Pod validation should enforce compression_info: CompressionInfo type.
Right now the check only matches the field name; Option<CompressionInfo> would pass and generate an incorrect offset/layout for Pod usage.
🛠️ Proposed fix (type validation)
- let has_compression_info = fields
- .iter()
- .any(|f| f.ident.as_ref().is_some_and(|name| name == "compression_info"));
-
- if !has_compression_info {
+ let compression_info_field = fields
+ .iter()
+ .find(|f| f.ident.as_ref().is_some_and(|name| name == "compression_info"));
+
+ if let Some(field) = compression_info_field {
+ let is_compression_info = matches!(
+ &field.ty,
+ syn::Type::Path(tp)
+ if tp.path.segments.last().map(|s| s.ident == "CompressionInfo").unwrap_or(false)
+ );
+ if !is_compression_info {
+ return Err(syn::Error::new_spanned(
+ &field.ty,
+ "Pod struct must use `compression_info: CompressionInfo` (non-optional) for Pod layout.",
+ ));
+ }
+ return Ok(());
+ }
+
+ if compression_info_field.is_none() {
return Err(syn::Error::new_spanned(
struct_name,
"Pod struct must have a 'compression_info: CompressionInfo' field (non-optional). \
For Pod types, use `light_compressible::compression_info::CompressionInfo`.",
));
}
- Ok(())
+    Ok(())
| let compress_arms: Vec<_> = self.accounts.iter().map(|info| { | ||
| let name = qualify_type_with_crate(&info.account_type); | ||
|
|
||
| if info.is_zero_copy { | ||
| // Pod (zero-copy) path: use bytemuck instead of Borsh | ||
| quote! { | ||
| d if d == #name::LIGHT_DISCRIMINATOR => { | ||
| drop(data); | ||
| let data_borrow = account_info.try_borrow_data().map_err(__anchor_to_program_error)?; | ||
| // Skip 8-byte discriminator and read Pod data directly | ||
| let pod_bytes = &data_borrow[8..8 + core::mem::size_of::<#name>()]; | ||
| let mut account_data: #name = *bytemuck::from_bytes(pod_bytes); | ||
| drop(data_borrow); | ||
|
|
||
| let compressed_info = light_sdk::interface::compress_account::prepare_account_for_compression::<#name>( | ||
| program_id, | ||
| account_info, | ||
| &mut account_data, | ||
| meta, | ||
| cpi_accounts, | ||
| &compression_config.address_space, | ||
| )?; | ||
| Ok(Some(compressed_info)) | ||
| let compressed_info = light_sdk::interface::compress_account::prepare_account_for_compression_pod::<#name>( | ||
| program_id, | ||
| account_info, | ||
| &mut account_data, | ||
| meta, | ||
| cpi_accounts, | ||
| &compression_config.address_space, | ||
| )?; | ||
| Ok(Some(compressed_info)) | ||
| } |
Guard Pod slice bounds to avoid on-chain panics.
The zero-copy arm slices data_borrow[8..8+size_of] without checking length. A malformed/undersized account will panic instead of returning a program error.
🐛 Proposed fix (bounds check before slicing)
- // Skip 8-byte discriminator and read Pod data directly
- let pod_bytes = &data_borrow[8..8 + core::mem::size_of::<#name>()];
+ // Skip 8-byte discriminator and read Pod data directly
+ let expected_len = 8 + core::mem::size_of::<#name>();
+ if data_borrow.len() < expected_len {
+ return Err(solana_program_error::ProgramError::InvalidAccountData);
+ }
+    let pod_bytes = &data_borrow[8..expected_len];
| }).collect(); | ||
| // Only generate verifications for data fields that exist on the state struct | ||
| // For zero_copy accounts, convert Pubkey to bytes for comparison | ||
| let is_zero_copy = ctx_info.is_zero_copy; | ||
| let data_verifications: Vec<_> = data_fields.iter().filter_map(|field| { | ||
| let field_str = field.to_string(); | ||
| // Skip fields that don't exist on the state struct (e.g., params-only seeds) | ||
| if !ctx_info.state_field_names.contains(&field_str) { | ||
| return None; | ||
| } | ||
| Some(quote! { | ||
| if data.#field != seeds.#field { | ||
| return std::result::Result::Err(LightInstructionError::SeedMismatch.into()); | ||
| } | ||
| }) | ||
| if is_zero_copy { | ||
| // For zero_copy accounts, Pod types use [u8; 32] instead of Pubkey, | ||
| // so convert the seed's Pubkey to bytes for comparison | ||
| Some(quote! { | ||
| if data.#field != seeds.#field.to_bytes() { | ||
| return std::result::Result::Err(LightInstructionError::SeedMismatch.into()); | ||
| } | ||
| }) | ||
| } else { | ||
| Some(quote! { | ||
| if data.#field != seeds.#field { | ||
| return std::result::Result::Err(LightInstructionError::SeedMismatch.into()); | ||
| } | ||
| }) | ||
| } |
Zero‑copy seed verification breaks for non‑Pubkey data fields.
The zero‑copy branch unconditionally calls seeds.<field>.to_bytes(). For u64 (or other non‑Pubkey) seed fields this won’t compile and/or compares the wrong types. Gate the bytes conversion on Pubkey fields only.
🛠️ Proposed fix (only convert Pubkey seeds)
- if is_zero_copy {
- // For zero_copy accounts, Pod types use [u8; 32] instead of Pubkey,
- // so convert the seed's Pubkey to bytes for comparison
- Some(quote! {
- if data.#field != seeds.#field.to_bytes() {
- return std::result::Result::Err(LightInstructionError::SeedMismatch.into());
- }
- })
- } else {
+ let is_pubkey_seed = instruction_data_types
+ .get(&field_str)
+ .map(|ty| matches!(
+ ty,
+ syn::Type::Path(tp)
+ if tp.path.segments.last().map(|s| s.ident == "Pubkey").unwrap_or(false)
+ ))
+ .unwrap_or(false);
+ if is_zero_copy && is_pubkey_seed {
+ // Pod Pubkey fields are [u8; 32] – compare against bytes
+ Some(quote! {
+ if data.#field != seeds.#field.to_bytes() {
+ return std::result::Result::Err(LightInstructionError::SeedMismatch.into());
+ }
+ })
+ } else {
Some(quote! {
if data.#field != seeds.#field {
return std::result::Result::Err(LightInstructionError::SeedMismatch.into());
}
})
}
| if info.is_zero_copy { | ||
| quote! { | ||
| LightAccountVariant::#variant_name { .. } => panic!("compression_info_mut_opt not supported on zero_copy unpacked variants"), | ||
| LightAccountVariant::#packed_variant_name(_) => panic!("compression_info_mut_opt not supported on packed variants"), | ||
| } | ||
| } else { | ||
| let inner_type = qualify_type_with_crate(&info.inner_type); | ||
| quote! { | ||
| LightAccountVariant::#variant_name { data, .. } => <#inner_type as light_sdk::interface::HasCompressionInfo>::compression_info_mut_opt(data), | ||
| LightAccountVariant::#packed_variant_name(_) => panic!("compression_info_mut_opt not supported on packed variants"), | ||
| } | ||
| } | ||
| }); |
Consider returning Result instead of panic!() for unsupported variants.
The compression_info_mut_opt method uses panic!() for zero-copy unpacked variants and packed variants. While this might be intentional to catch programming errors, it could cause unexpected runtime crashes. Consider returning a sentinel value or changing the trait signature to return Option<&mut Option<CompressionInfo>>.
// Current: panics at runtime
LightAccountVariant::`#variant_name` { .. } => panic!("compression_info_mut_opt not supported on zero_copy unpacked variants"),

If this is intentional (to catch misuse during development), adding a doc comment explaining the panic conditions would help users understand the contract.
| - **parse.rs** - Parses `#[light_account(init)]`, `#[light_account(token)]`, `#[light_account(init)]` attributes | ||
| - **codegen.rs** - Generates `LightPreInit` and `LightFinalize` trait implementations | ||
|
|
||
| ### `program/` - RentFree Program Macro |
Update section header to match the renamed macro.
The section header still says "RentFree Program Macro," but line 42 correctly documents this as the #[light_program] attribute macro. Update the header for consistency.
📝 Suggested section header update
-### `program/` - RentFree Program Macro
+### `program/` - Light Program Macro
| // Access the SDK compression info field directly (24 bytes) | ||
| let compression_info_offset = A::COMPRESSION_INFO_OFFSET; | ||
| let account_bytes = bytemuck::bytes_of(account_data); | ||
| let compression_info_bytes = | ||
| &account_bytes[compression_info_offset..compression_info_offset + core::mem::size_of::<SdkCompressionInfo>()]; | ||
| let sdk_ci: &SdkCompressionInfo = bytemuck::from_bytes(compression_info_bytes); | ||
|
|
||
| let last_claimed_slot = sdk_ci.last_claimed_slot; | ||
| let rent_cfg = sdk_ci.rent_config; | ||
| let state = AccountRentState { | ||
| num_bytes: bytes, | ||
| current_slot, | ||
| current_lamports, | ||
| last_claimed_slot, | ||
| }; | ||
| if state | ||
| .is_compressible(&rent_cfg, rent_exemption_lamports) | ||
| .is_none() | ||
| { | ||
| msg!( | ||
| "prepare_account_for_compression_pod failed: \ | ||
| Account is not compressible by rent function. \ | ||
| slot: {}, lamports: {}, bytes: {}, rent_exemption_lamports: {}, last_claimed_slot: {}, rent_config: {:?}", | ||
| current_slot, | ||
| current_lamports, | ||
| bytes, | ||
| rent_exemption_lamports, | ||
| last_claimed_slot, | ||
| rent_cfg | ||
| ); | ||
| return Err(LightSdkError::ConstraintViolation.into()); | ||
| } | ||
|
|
||
| // Set compression state to compressed in the account data | ||
| // We need to modify the Pod struct in place | ||
| { | ||
| let mut data = account_info | ||
| .try_borrow_mut_data() | ||
| .map_err(|_| LightSdkError::ConstraintViolation)?; | ||
|
|
||
| // Skip discriminator (8 bytes) to get to the Pod data | ||
| let discriminator_len = A::LIGHT_DISCRIMINATOR.len(); | ||
| let pod_data = &mut data[discriminator_len..]; | ||
|
|
||
| // Mark as compressed using SDK CompressionInfo (24 bytes) | ||
| let compressed_info = SdkCompressionInfo { | ||
| last_claimed_slot: sdk_ci.last_claimed_slot, | ||
| lamports_per_write: sdk_ci.lamports_per_write, | ||
| config_version: sdk_ci.config_version, | ||
| state: CompressionState::Compressed, // Mark as compressed | ||
| _padding: 0, | ||
| rent_config: sdk_ci.rent_config, | ||
| }; | ||
|
|
||
| let info_bytes = bytemuck::bytes_of(&compressed_info); | ||
| let offset = A::COMPRESSION_INFO_OFFSET; | ||
| let end = offset + core::mem::size_of::<SdkCompressionInfo>(); | ||
| pod_data[offset..end].copy_from_slice(info_bytes); | ||
| } |
Add bounds checks before slicing Pod buffers.
Both the compression-info slice and the discriminator-stripped Pod slice can panic if the account size or offset is wrong. Return ConstraintViolation instead of panicking.
🐛 Proposed fix (bounds checks for Pod slices)
- let compression_info_bytes =
- &account_bytes[compression_info_offset..compression_info_offset + core::mem::size_of::<SdkCompressionInfo>()];
+ let end = compression_info_offset + core::mem::size_of::<SdkCompressionInfo>();
+ if end > account_bytes.len() {
+ msg!("CompressionInfo offset out of bounds for Pod type");
+ return Err(LightSdkError::ConstraintViolation.into());
+ }
+ let compression_info_bytes = &account_bytes[compression_info_offset..end];
@@
- // Skip discriminator (8 bytes) to get to the Pod data
+ // Skip discriminator (8 bytes) to get to the Pod data
let discriminator_len = A::LIGHT_DISCRIMINATOR.len();
- let pod_data = &mut data[discriminator_len..];
+ let expected_len = discriminator_len + core::mem::size_of::<A>();
+ if data.len() < expected_len {
+ msg!("Account data too small for Pod type");
+ return Err(LightSdkError::ConstraintViolation.into());
+ }
+    let pod_data = &mut data[discriminator_len..expected_len];
| let transfer_instruction = Instruction { | ||
| program_id: Pubkey::from(SYSTEM_PROGRAM_ID), | ||
| program_id: Pubkey::default(), // System Program ID |
Critical: Incorrect System Program ID will cause CPI failure.
Using Pubkey::default() (all zeros) as the System Program ID is incorrect. The System Program has a specific ID (11111111111111111111111111111111). This will cause the CPI transfer to fail at runtime.
🐛 Proposed fix
+use solana_system_interface::program::ID as SYSTEM_PROGRAM_ID;
let transfer_instruction = Instruction {
- program_id: Pubkey::default(), // System Program ID
+ program_id: SYSTEM_PROGRAM_ID,
    accounts: vec![

Or use the constant from solana-program if that's already in scope:
program_id: solana_program::system_program::ID,
| // Add required system accounts for transfer2 instruction | ||
| // Light system program is at index 0 in the cpi_accounts slice | ||
| all_account_infos.push( | ||
| cpi_accounts | ||
| .account_infos() | ||
| .first() | ||
| .ok_or(ProgramError::NotEnoughAccountKeys)? | ||
| .clone(), | ||
| ); |
🧩 Analysis chain
🏁 Script executed:
# Search for CpiAccounts struct definition and its methods
rg "struct CpiAccounts|impl CpiAccounts" -A 5 -B 2 -g '*.rs'Repository: Lightprotocol/light-protocol
Length of output: 9576
🏁 Script executed:
# Check for light_system_program accessor or similar methods
rg "light_system_program|fn account_infos" -B 2 -A 3 -g '*.rs'Repository: Lightprotocol/light-protocol
Length of output: 50385
🏁 Script executed:
# Look at the actual file context around the problematic lines
head -n 230 sdk-libs/token-sdk/src/compressible/decompress_runtime.rs | tail -n 30
Repository: Lightprotocol/light-protocol
Length of output: 1451
🏁 Script executed:
# Search for other usages of cpi_accounts.account_infos().first() pattern
rg "account_infos\(\)\.first\(\)" -B 3 -A 1 -g '*.rs'Repository: Lightprotocol/light-protocol
Length of output: 54
🏁 Script executed:
# Check CpiAccounts implementation for dedicated accessor methods
cat sdk-libs/sdk-types/src/cpi_accounts/v1.rs | grep -A 5 "pub fn"
Repository: Lightprotocol/light-protocol
Length of output: 5691
🏁 Script executed:
# Check v2 as well
cat sdk-libs/sdk-types/src/cpi_accounts/v2.rs | grep -A 5 "pub fn"
Repository: Lightprotocol/light-protocol
Length of output: 4933
🏁 Script executed:
# Search for any CpiAccountsTrait methods
rg "trait CpiAccountsTrait" -A 20 -g '*.rs'Repository: Lightprotocol/light-protocol
Length of output: 2671
🏁 Script executed:
# Check the exact context and see what the fixed version should look like
sed -n '210,235p' sdk-libs/token-sdk/src/compressible/decompress_runtime.rs
Repository: Lightprotocol/light-protocol
Length of output: 1132
🏁 Script executed:
# Confirm the broader context for the replacement
sed -n '205,240p' sdk-libs/token-sdk/src/compressible/decompress_runtime.rs
Repository: Lightprotocol/light-protocol
Length of output: 1550
Use the dedicated light_system_program() accessor instead of relying on index ordering.
The CpiAccounts type provides a light_system_program() method that uses enum-based index access with proper error handling. Replace cpi_accounts.account_infos().first() with cpi_accounts.light_system_program()? to match the pattern already used for registered_program_pda() and account_compression_authority() in the same block. This eliminates the brittle index assumption and provides consistent error handling throughout.
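A sketch of the replacement this suggests (error conversion is assumed to follow the surrounding `?`-based pattern):

```rust
// Push the light system program via its dedicated accessor instead of assuming index 0.
all_account_infos.push(cpi_accounts.light_system_program()?.clone());
```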
| // Add CPI context if present | ||
| if let Ok(cpi_context) = cpi_accounts.cpi_context() { | ||
| all_account_infos.push(cpi_context.clone()); | ||
| } |
Gate CPI-context AccountInfo on has_prior_context to keep account order aligned.
decompress_full_token_accounts_with_indices omits the CPI-context meta when has_prior_context is false, but this block still inserts the context account if it exists. That shifts all subsequent accounts and can break CPI with mismatched indices.
🐛 Suggested fix
- // Add CPI context if present
- if let Ok(cpi_context) = cpi_accounts.cpi_context() {
- all_account_infos.push(cpi_context.clone());
- }
+ // Add CPI context only when the instruction includes it
+ if has_prior_context {
+ let cpi_context = cpi_accounts
+ .cpi_context()
+ .map_err(|_| ProgramError::InvalidAccountData)?;
+ all_account_infos.push(cpi_context.clone());
+    }
| #[derive(PodCompressionInfoField)] | ||
| #[account(zero_copy)] | ||
| #[repr(C)] | ||
| pub struct ZeroCopyRecord { |
use discriminator derive macro
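What this asks for might look like the following sketch, assuming the SDK's `LightDiscriminator` derive generates the discriminator constants instead of the hand-written ones shown below (the field list is illustrative):

```rust
#[derive(PodCompressionInfoField, LightDiscriminator)]
#[account(zero_copy)]
#[repr(C)]
pub struct ZeroCopyRecord {
    pub compression_info: CompressionInfo, // Pod layout: non-optional, fixed-size
    pub owner: [u8; 32],                   // Pod types store pubkeys as raw bytes
    pub counter: u64,
}
```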
| const LIGHT_DISCRIMINATOR_SLICE: &'static [u8] = &Self::LIGHT_DISCRIMINATOR; | ||
| } | ||
|
|
||
| impl Default for ZeroCopyRecord { |
derive default
rename file
| } | ||
|
|
||
| // ============================================================================= | ||
| // LEGACY TRAITS (kept for backward compatibility during transition) |
remove
| } | ||
|
|
||
| #[cfg(test)] | ||
| mod tests { |
move to tests dir
| /// 2. The `COMPRESSION_INFO_OFFSET` matches the actual byte offset of the field | ||
| /// 3. The struct implements `bytemuck::Pod` and `bytemuck::Zeroable` | ||
| /// 4. The `compression_info` field uses SDK `CompressionInfo` (24 bytes) | ||
| pub trait PodCompressionInfoField: bytemuck::Pod { |
try to unify
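To make the invariants listed above concrete, an implementation would look roughly like this (a sketch: the required item is inferred from how `A::COMPRESSION_INFO_OFFSET` is used elsewhere in this PR, and the offset value is illustrative, normally produced by the derive macro rather than written by hand):

```rust
impl PodCompressionInfoField for ZeroCopyRecord {
    // Invariant 2: must equal the actual byte offset of `compression_info`
    // inside the #[repr(C)] layout (0 here because it is listed first).
    const COMPRESSION_INFO_OFFSET: usize = 0;
}
```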
| unsafe impl bytemuck::Pod for CompressionState {} | ||
| unsafe impl bytemuck::Zeroable for CompressionState {} |
is this correct?
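For context on this question: `bytemuck::Pod` requires every bit pattern of the type's representation to be a valid value, which a fieldless enum satisfies only if its variants cover every value of its repr; otherwise the manual `unsafe impl` is unsound. One Pod-safe alternative is a transparent newtype over `u8` (a sketch, not the crate's actual definition; constant names are illustrative):

```rust
use bytemuck::{Pod, Zeroable};

/// Every u8 bit pattern is a valid value, so Pod and Zeroable hold without unsafe impls.
#[derive(Clone, Copy, PartialEq, Eq, Pod, Zeroable)]
#[repr(transparent)]
pub struct CompressionStateByte(pub u8);

impl CompressionStateByte {
    pub const DECOMPRESSED: Self = Self(0);
    pub const COMPRESSED: Self = Self(1);
}
```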
|
|
||
| /// Simple field accessor trait for types with a `compression_info: Option<CompressionInfo>` field. | ||
| /// Implement this trait and get `HasCompressionInfo` for free via blanket impl. | ||
| pub trait CompressionInfoField { |
move all traits of this file into separate files
Summary by CodeRabbit
New Features
Breaking Changes
`#[rentfree_program]` renamed to `#[light_program]`.
Documentation