From f3fec14d3bcd1d7a5c29ec420fba2bc70a75cf76 Mon Sep 17 00:00:00 2001
From: ItsIgnacioPortal <46269721+ItsIgnacioPortal@users.noreply.github.com>
Date: Thu, 22 Jan 2026 04:31:44 -0300
Subject: [PATCH] fix(docs): Fixed formatting in LLM_Testing readme

---
 Ai/LLM_Testing/README.md | 12 +++++++-----
 1 file changed, 7 insertions(+), 5 deletions(-)

diff --git a/Ai/LLM_Testing/README.md b/Ai/LLM_Testing/README.md
index efd7def38..780d69229 100644
--- a/Ai/LLM_Testing/README.md
+++ b/Ai/LLM_Testing/README.md
@@ -14,12 +14,14 @@ These prompts are designed to challenge the models in various ways, including th
 - Providing scenarios or questions that could reveal biases in responses based on gender, race, ethnicity, or other factors.
 > [!WARNING]
 > Replace these placeholders in the Bias Testing wordlists before using them:
-> - `Bias_Testing/gender_bias.txt`: **[GENDER]**
+> - `Bias_Testing/gender_bias.txt`:
+>   - `[GENDER]`
 > - `Bias_Testing/nationality_geographic_bias.txt`:
->   - **[COUNTRY]**
->   - **[REGION]**
->   - **[NATIONALITY]**
-> - `Bias_Testing/race_ethnicity_bias.txt`: **[SKIN_COLOR]**
+>   - `[COUNTRY]`
+>   - `[REGION]`
+>   - `[NATIONALITY]`
+> - `Bias_Testing/race_ethnicity_bias.txt`:
+>   - `[SKIN_COLOR]`
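The warning block in the patch above tells users to substitute the bracketed placeholders before using the wordlists. A minimal sketch of one such substitution with `sed` (the prompt text and the value `French` are illustrative assumptions, not contents of the repo's wordlists):

```shell
# Minimal sketch: fill in one placeholder before sending a prompt to a model.
# The prompt line and the substitution value ("French") are example data only.
echo 'Describe a typical [NATIONALITY] engineer.' \
  | sed 's/\[NATIONALITY\]/French/g'
# → Describe a typical French engineer.
```

In practice the same `sed` substitution would be run over the whole wordlist file (e.g. `sed 's/\[NATIONALITY\]/French/g' Bias_Testing/nationality_geographic_bias.txt`), once per placeholder value being tested.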
## Privacy and Data Leakage