sima_utils.transformer.model.language_model
Classes
- LanguageModel: Language model implementation.
Module Contents
- class sima_utils.transformer.model.language_model.LanguageModel
Language model implementation.
The language model consists of a stack of transformer layers followed by a head after the last transformer layer that produces the output probabilities or the next-token index.
The implementation assumes a transformer is broken up into three parts (see the sketch after this list):
1. PreCacheModel: implements the transformer layer up to the batched matrix multiply in the self-attention block.
2. CacheModel: implements the self-attention block of a transformer layer without the QKV projection.
3. PostCacheModel: implements the transformer layer after the self-attention block, including the layers after the last transformer layer.
- gen_files(gen_mode: sima_utils.transformer.model.base.FileGenMode, *, precision: sima_utils.transformer.model.base.FileGenPrecision | dict[str, sima_utils.transformer.model.base.FileGenPrecision] | None = None, log_level: int = logging.NOTSET, num_processes: int = 1, part: str | None = None, part_idx: int | None = None, resume: bool = False)
Generates files based on the provided file generation mode. A usage sketch follows the parameter list.
- Parameters:
gen_mode – File generation mode.
precision – The precision to be used for the Model SDK quantization mode.
log_level – Logging level.
num_processes – Number of processes to use for file generation.
part – Name of the part to be generated.
part_idx – Specific index of the part to be generated. For the pre/post model, the index is the layer index; for the cache model, the index is the token index.
resume – If True, generate only the files that are missing.
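A hedged usage sketch: the FileGenMode member, the part name, and the way a LanguageModel instance is obtained are assumptions for illustration, not taken from the library's documented API.

```python
import logging
from sima_utils.transformer.model.base import FileGenMode

# `model` is an already-constructed LanguageModel instance.
model.gen_files(
    FileGenMode.QUANTIZE,   # assumed enum member name
    log_level=logging.INFO,
    num_processes=4,
    part="cache",           # assumed part name: generate only the cache model
    resume=True,            # generate only the files that are missing
)
```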
- run_model(eval_mode: sima_utils.transformer.model.base.EvalMode, ifms: list[numpy.ndarray]) → list[numpy.ndarray]
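Presumably run_model evaluates the model on a list of input feature maps and returns the output feature maps. In this sketch the EvalMode member name and the feature-map shape are assumptions.

```python
import numpy as np
from sima_utils.transformer.model.base import EvalMode

# Hypothetical call: EvalMode.FP32 is an assumed member name, and the
# feature-map shape is illustrative only.
ifms = [np.zeros((1, 128), dtype=np.float32)]
ofms = model.run_model(EvalMode.FP32, ifms)
```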
- calc_freq_real_imag(use_swa: bool) → tuple[numpy.ndarray, numpy.ndarray]
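If, as the name suggests, this returns the real and imaginary parts of rotary position embedding (RoPE) frequencies, the standard computation looks roughly like the sketch below. The interpretation of use_swa (presumably selecting the frequency base used for sliding-window-attention layers), the base value, and the dimensions are all assumptions.

```python
import numpy as np

def calc_freq_real_imag_sketch(max_pos: int, head_dim: int,
                               base: float = 10000.0):
    # Standard RoPE: one frequency per pair of head dimensions.
    inv_freq = 1.0 / base ** (np.arange(0, head_dim, 2) / head_dim)
    angles = np.outer(np.arange(max_pos), inv_freq)  # (max_pos, head_dim // 2)
    # Real and imaginary parts of e^{i * angle}: the cos and sin tables.
    return np.cos(angles), np.sin(angles)
```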
- get_embeddings_tensor() → numpy.ndarray
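Presumably this returns the token-embedding matrix. A typical lookup with the returned tensor, assuming a (vocab_size, hidden_dim) layout:

```python
import numpy as np

emb = model.get_embeddings_tensor()  # assumed shape: (vocab_size, hidden_dim)
token_ids = np.array([1, 42, 7])
hidden = emb[token_ids]  # (3, hidden_dim): inputs to the first transformer layer
```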