llm/tests (last updated 2024-03-04 13:29:07 -08:00)

| File | Last commit | Date |
| --- | --- | --- |
| conftest.py | Upgrade to run against OpenAI >= 1.0 | 2024-01-25 22:00:44 -08:00 |
| test-llm-load-plugins.sh | Test using test-llm-load-plugins.sh | 2024-01-25 17:44:34 -08:00 |
| test_aliases.py | Make tests robust against extra plugins, closes #258 | 2023-09-10 11:21:04 -07:00 |
| test_chat.py | Windows readline fix, plus run CI against macOS and Windows | 2024-01-26 16:24:58 -08:00 |
| test_cli_openai_models.py | Upgrade to run against OpenAI >= 1.0 | 2024-01-25 22:00:44 -08:00 |
| test_embed.py | batch_size= argument to embed_multi(), refs #273 | 2023-09-13 16:24:04 -07:00 |
| test_embed_cli.py | Windows readline fix, plus run CI against macOS and Windows | 2024-01-26 16:24:58 -08:00 |
| test_encode_decode.py | NumPy decoding docs, plus extra tests for llm.encode/decode | 2023-09-14 14:01:47 -07:00 |
| test_keys.py | Windows readline fix, plus run CI against macOS and Windows | 2024-01-26 16:24:58 -08:00 |
| test_llm.py | llm logs -r/--response option, closes #431 | 2024-03-04 13:29:07 -08:00 |
| test_migrate.py | Binary embeddings (#254) | 2023-09-11 18:58:44 -07:00 |
| test_plugins.py | Moved iter_prompt from Response to Model, moved a lot of other stuff | 2023-07-10 07:45:11 -07:00 |
| test_templates.py | Upgrade to run against OpenAI >= 1.0 | 2024-01-25 22:00:44 -08:00 |