Large-scale language models (LLMs) do not distinguish fact from fiction; do not assume that generated text is true. Additionally, LLMs reflect the biases inherent to the systems they were trained on, so they should not be deployed in systems that interact with humans. All LLMs should be approached with caution in use cases that are sensitive to biases around human attributes. Moreover, it is difficult to know what disciplined testing procedures can fully characterize the capabilities of LLMs, or how the data they are trained on influences their vast range of outputs.

All examples listed on this page are provided for demonstration purposes only. Although efforts have been made to filter and sanitize model output, the models may occasionally generate content that is racist, biased, adult-oriented, or nonsensical. meraGPT is not liable or responsible for any unintended use of these models or their outputs in downstream systems.