detector_benchmark.generation.generator

Module Contents

class detector_benchmark.generation.generator.LLMGenerator(model: transformers.AutoModelForCausalLM, model_config: detector_benchmark.utils.configs.ModelConfig)

Bases: torch.nn.Module

generator
tokenizer
device
gen_params
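
A minimal construction sketch is shown below. The checkpoint name and the ModelConfig fields are assumptions for illustration only; consult detector_benchmark.utils.configs.ModelConfig for the actual schema:

from transformers import AutoModelForCausalLM

from detector_benchmark.generation.generator import LLMGenerator
from detector_benchmark.utils.configs import ModelConfig

# Load any causal LM checkpoint; "gpt2" is only an example.
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Field names here are hypothetical, not taken from this page.
model_config = ModelConfig(model_name="gpt2", device="cpu")

generator = LLMGenerator(model, model_config)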
forward(samples: list, batch_size: int = 1, watermarking_scheme: detector_benchmark.watermark.auto_watermark.AutoWatermark | None = None) → list[str]

Generate text from a list of input contexts.

Parameters:

samples: list

A list of input contexts for text generation.

batch_size: int

The batch size to use for generation. Defaults to 1.

watermarking_scheme: AutoWatermark | None

The watermarking scheme to use for generation. If provided, it must be an AutoWatermark instance, which supplies the LogitsProcessor applied during decoding. Defaults to None (no watermarking).

Returns:

list[str]

A list of generated texts.
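
A hedged usage sketch, assuming generator was constructed as in the example above; the prompts and batch size are illustrative:

prompts = [
    "The detector benchmark evaluates",
    "Watermarked text can be identified by",
]

# Plain generation; pass an AutoWatermark instance as
# watermarking_scheme to produce watermarked text instead.
texts = generator.forward(prompts, batch_size=2)
for text in texts:
    print(text)

Because LLMGenerator subclasses torch.nn.Module, calling generator(prompts, batch_size=2) dispatches to forward as well.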

forward_debug(samples: list, batch_size: int = 1, watermarking_scheme: detector_benchmark.watermark.auto_watermark.AutoWatermark | None = None) → tuple

Generate text from a list of input contexts, additionally returning the logits for debugging.

Parameters:

samples: list

A list of input contexts for text generation.

batch_size: int

The batch size to use for generation. Defaults to 1.

watermarking_scheme: AutoWatermark | None

The watermarking scheme to use for generation. Defaults to None (no watermarking).

Returns:

tuple

A tuple containing the generated texts, the raw logits, and the processed logits.
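
A sketch of the debug path, again assuming the generator and prompts built above; the exact shapes and dtypes of the two logits outputs are not specified on this page:

texts, raw_logits, processed_logits = generator.forward_debug(
    prompts, batch_size=1
)

# Assumption: with watermarking_scheme=None the raw and processed
# logits coincide; pass an AutoWatermark instance to inspect how the
# scheme alters the distribution at each decoding step.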