LLM Prompt Engineering for Developers: The Art and Science of Unlocking LLMs' True Potential

"Explore the dynamic field of LLM prompt engineering with this book. Starting with fundamental NLP principles and progressing to sophisticated prompt engineering methods, this book serves as a comprehensive guide. Key Features: In-depth coverage of prompt engineering from basics to a...

Bibliographic Details
Other Authors: El Amri, Aymen (author)
Format: Electronic book
Language: English
Published: Birmingham: Packt Publishing Limited, 2024.
Subjects:
View in Biblioteca Universitat Ramon Llull: https://discovery.url.edu/permalink/34CSUC_URL/1im36ta/alma991009825887106719
Table of Contents:
  • Intro
  • Preface
  • What Are You Going to Learn?
  • Who Is This Guide For?
  • Join the Community
  • About the Author
  • The Companion Toolkit
  • Your Feedback Matters
  • From NLP to Large Language Models
  • What is Natural Language Processing?
  • Language Models
  • Statistical Models (N-Grams)
  • Knowledge-Based Models
  • Contextual Language Models
  • Neural Network-Based Models
  • Feedforward Neural Networks
  • Recurrent Neural Networks (RNNs)
  • Long Short-Term Memory (LSTM)
  • Gated Recurrent Units (GRUs)
  • Transformer Models
  • Bidirectional Encoder Representations from Transformers (BERT)
  • Generative Pre-trained Transformer (GPT)
  • What's Next?
  • Introduction to Prompt Engineering
  • OpenAI GPT and Prompting: An Introduction
  • Generative Pre-trained Transformers (GPT) Models
  • What Is GPT and How Is It Different from ChatGPT?
  • The GPT Models Series: A Closer Look
  • GPT-3.5
  • GPT-4
  • Other Models
  • API Usage vs. Web Interface
  • Tokens
  • Costs, Tokens, and Initial Prompts: How to Calculate the Cost of Using a Model
  • Prompting: How Does It Work?
  • Probability and Sampling: At the Heart of GPT
  • Understanding the API Parameters
  • Temperature
  • Top-p
  • Top-k
  • Sequence Length (max_tokens)
  • Presence Penalty (presence_penalty)
  • Frequency Penalty (frequency_penalty)
  • Number of Responses (n)
  • Best of (best_of)
  • OpenAI Official Examples
  • Using the API without Coding
  • Completion (Deprecated)
  • Chat
  • Insert (Deprecated)
  • Edit (Deprecated)
  • Setting Up the Environment
  • Choosing the Model
  • Choosing the Programming Language
  • Installing the Prerequisites
  • Installing the OpenAI Python library
  • Getting an OpenAI API key
  • A Hello World Example
  • Interactive Prompting
  • Interactive Prompting with Multiline Prompt
  • Few-Shot Learning and Chain of Thought
  • What Is Few-Shot Learning?
  • Zero-Shot vs Few-Shot Learning
  • Approaches to Few-Shot Learning
  • Prior Knowledge about Similarity
  • Prior Knowledge about Learning
  • Prior Knowledge of Data
  • Examples of Few-Shot Learning
  • Limitations of Few-Shot Learning
  • Chain of Thought (CoT)
  • Zero-Shot CoT Prompting
  • Auto Chain of Thought Prompting (AutoCoT)
  • Self-Consistency
  • Transfer Learning
  • What Is Transfer Learning?
  • Inductive Transfer
  • Transductive Transfer
  • Inductive vs. Transductive Transfer
  • Transfer Learning, Fine-Tuning, and Prompt Engineering
  • Fine-Tuning with a Prompt Dataset: A Practical Example
  • Why Is Prompt Engineering Vital for Transfer Learning and Fine-Tuning?
  • Perplexity as a Metric for Prompt Optimization
  • Avoid Surprising the Model
  • How to Calculate Perplexity?
  • A Practical Example with Betterprompt
  • Hack the Prompt
  • ReAct: Reason + Act
  • What Is It?
  • ReAct Using LangChain
  • General Knowledge Prompting
  • What Is General Knowledge Prompting?
  • Example of General Knowledge Prompting