Language Model: A statistical model that learns patterns and relationships in text data to generate human-like text.
Transformer: A neural network architecture that uses self-attention mechanisms to process sequential data.
GPT: A type of language model that generates text based on patterns learned from pre-training on large text datasets.
Fine-tuning: The process of adapting a pre-trained language model to a specific task or domain by training it on a smaller dataset.
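The self-attention mechanism mentioned above can be sketched in a few lines of plain Python. This is a toy illustration, not a real Transformer layer: the 2-d "embeddings" are made up, and real models apply learned query/key/value projection matrices before attending, which this sketch omits.

```python
import math

def softmax(row):
    # Numerically stable softmax over one row of scores.
    m = max(row)
    exps = [math.exp(s - m) for s in row]
    total = sum(exps)
    return [e / total for e in exps]

def self_attention(x):
    # Toy version: queries, keys, and values are the embeddings themselves.
    d = len(x[0])
    out = []
    for q in x:
        # Scaled dot-product score of this token against every token.
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in x]
        weights = softmax(scores)
        # Output is a weighted mix of all token vectors.
        out.append([sum(w * v[j] for w, v in zip(weights, x)) for j in range(d)])
    return out

# Three hypothetical 2-d token embeddings.
tokens = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
mixed = self_attention(tokens)
print(len(mixed), len(mixed[0]))  # same shape as the input sequence
```

Each output vector blends information from every position in the sequence, which is what lets a Transformer model long-range relationships in text.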
ROUGE Score: A set of metrics used to evaluate the quality of summarization models.
Fluency: The ability of a language model to generate grammatically correct and coherent text.
Coherence: The logical and consistent flow of ideas in the generated text.
Diversity: The variety and uniqueness of the generated text, avoiding repetition and dullness.
Hallucination: A phenomenon where the language model generates plausible but factually incorrect information.
Bias: The tendency of a language model to generate text that reflects societal biases present in the training data.
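The unigram variant of this metric (ROUGE-1) can be sketched directly from its definition: count the overlapping words between a candidate summary and a reference. This is a simplified illustration assuming whitespace tokenization; real ROUGE implementations add stemming and higher-order n-grams.

```python
from collections import Counter

def rouge1(candidate, reference):
    # Simplified ROUGE-1: clipped unigram overlap between the two texts.
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())          # matching words, clipped
    recall = overlap / max(sum(ref.values()), 1)
    precision = overlap / max(sum(cand.values()), 1)
    f1 = 2 * precision * recall / (precision + recall) if overlap else 0.0
    return recall, precision, f1

r, p, f = rouge1("the cat sat on the mat", "the cat lay on the mat")
print(round(r, 2), round(p, 2), round(f, 2))  # 0.83 0.83 0.83
```

Recall rewards covering the reference's content, precision penalizes padding the summary with extra words, and F1 balances the two.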
In simpler terms:
ROUGE Score: A measure of how well the language model summarizes text.
Fluency: How smoothly and naturally the language model's output flows.
Coherence: How well the language model's output makes sense.
Diversity: How varied and unique the language model's output is.
Hallucination: When the language model makes up information that isn't in the input text.
Bias: When the language model's output reflects unfair or inaccurate stereotypes.
Toxicity: When the language model's output is harmful or offensive.
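Of the qualities above, diversity is the easiest to quantify directly: one common approach is a distinct-n ratio, the fraction of n-grams in the output that are unique. A minimal sketch (the function name and whitespace tokenization are illustrative choices, not a standard API):

```python
def distinct_n(text, n):
    # Fraction of unique n-grams in the text; higher means less repetition.
    tokens = text.split()
    ngrams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    return len(set(ngrams)) / len(ngrams) if ngrams else 0.0

repetitive = "good good good good"
varied = "the quick brown fox"
print(distinct_n(repetitive, 1), distinct_n(varied, 1))  # 0.25 1.0
```

A model that loops on the same phrase scores low, while varied output scores near 1.0.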