Target Generation

Every TGA (that includes 6Gen, SixForest, 6GAN, and your own additions) implements the trait quartet defined in tga/src/lib.rs:

  • TgaInfo supplies static NAME and DESCRIPTION constants so the CLI and docs can describe the algorithm.
  • TgaGenerator is the runtime iterator that produces Address values; it must be Clone + Send and expose a generate() method.
  • TgaModel links training output to its generator via type Generator and the build(seed) constructor. Models must be Serialize + Deserialize + Display so they can be persisted and inspected.
  • TGA is the Clap-friendly configuration (clap::Args + Serialize + Deserialize). Its train method consumes an iterator of Address seeds and returns a TgaModel.

A minimal plugin wires the configuration and model together like this:

#[derive(clap::Args, Clone, Serialize, Deserialize)]
pub struct FooGen {
    #[arg(long, default_value_t = 16)]
    pub window: usize,
}

#[derive(Clone, Serialize, Deserialize)]
pub struct FooModel {
    patterns: Vec<AddressPattern>,
}

impl TGA for FooGen {
    type Model = FooModel;

    async fn train<T: IntoIterator<Item = Address>>(
        &self,
        seeds: T,
    ) -> Result<Self::Model, String> {
        // `mine_patterns` stands in for your actual training routine.
        let patterns = mine_patterns(seeds, self.window)?;
        Ok(FooModel { patterns })
    }
}

impl TgaModel for FooModel {
    type Generator = FooGenerator;

    fn build(self, seed: usize) -> FooGenerator {
        FooGenerator::new(self, seed)
    }
}

Registering the plugin

  1. Add your module to tga/src/lib.rs, re-export the config/model types, and extend ModelEnum / ModelEnumIterator / TgaEnum with new variants. Each match arm must forward to your training and generation logic.
  2. Update pyrmap/src/lib.rs so the Python bindings can construct the new config class and map JSON configs onto your type (see the existing PySixGenConfig implementation for reference).
  3. To have the documentation pick up the new algorithm, add a page under site/content/docs/target-generation/ and reference it from the sidebar.

Best practices

  • Don't block the async executor: the train method runs inside a Tokio runtime, so offload expensive CPU work to tokio::task::spawn_blocking or Rayon rather than running it inline.
  • Implement Display for your model to surface metrics in CLI output (rmap train writes the string to stdout).
  • Store enough metadata in the model to reproduce behaviour across releases; the CLI serializes models with bincode, so prefer additive changes to the model struct when evolving it in a backwards-compatible way.